2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing: Latest Publications

Generation of Mixed Broadside and Skewed-Load Diagnostic Test Sets for Transition Faults
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.15
I. Pomeranz
{"title":"Generation of Mixed Broadside and Skewed-Load Diagnostic Test Sets for Transition Faults","authors":"I. Pomeranz","doi":"10.1109/PRDC.2011.15","DOIUrl":"https://doi.org/10.1109/PRDC.2011.15","url":null,"abstract":"This paper describes a diagnostic test generation procedure for transition faults that produces mixed test sets consisting of broadside and skewed-load tests. A mix of broadside and skewed-load tests yields improved diagnostic resolution compared with a single test type. The procedure starts from a mixed test set generated for fault detection. It uses two procedures to obtain new tests that are useful for diagnosis starting from existing tests. Both procedures allow the type of a test to be modified (from broadside to skewed-load and from skewed-load to broadside). The first procedure is fault independent. The second procedure targets specific fault pairs. Experimental results show that diagnostic test generation changes the mix of broadside and skewed-load tests in the test set compared with a fault detection test set.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127271347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
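As an illustration of the diagnostic-resolution notion this abstract relies on, the sketch below scores a mixed test set by the fault pairs it distinguishes: two faults are distinguished when some test produces different responses under them. The test vectors, fault names, and response table are invented stand-ins for fault simulation, not the paper's procedure.

```python
from itertools import combinations

# Toy illustration of diagnostic resolution for a mixed test set.
# Each test is (test_type, vector); test_type is "broadside" or "skewed-load".
# responses[fault][i] is a stand-in for the fault-simulation output on test i.

def distinguished_pairs(tests, responses):
    """Return the set of fault pairs whose responses differ on some test."""
    faults = list(responses)
    pairs = set()
    for f1, f2 in combinations(faults, 2):
        if any(responses[f1][i] != responses[f2][i] for i in range(len(tests))):
            pairs.add((f1, f2))
    return pairs

# Hypothetical data: three transition faults, two tests of different types.
tests = [("broadside", "0110"), ("skewed-load", "1011")]
responses = {
    "f1": ["00", "01"],
    "f2": ["00", "11"],   # distinguished from f1 by the skewed-load test
    "f3": ["10", "01"],   # distinguished from f1 by the broadside test
}
print(distinguished_pairs(tests, responses))
```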
Task Mapping and Partition Allocation for Mixed-Criticality Real-Time Systems
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.42
D. Tamas-Selicean, P. Pop
{"title":"Task Mapping and Partition Allocation for Mixed-Criticality Real-Time Systems","authors":"D. Tamas-Selicean, P. Pop","doi":"10.1109/PRDC.2011.42","DOIUrl":"https://doi.org/10.1109/PRDC.2011.42","url":null,"abstract":"In this paper we address the mapping of mixed-criticality hard real-time applications on distributed embedded architectures. We assume that the architecture provides both spatial and temporal partitioning, thus enforcing enough separation between applications. With temporal partitioning, each application runs in a separate partition, and each partition is allocated several time slots on the processors where the application is mapped. The sequence of time slots for all the applications on a processor are grouped within a Major Frame, which is repeated periodically. We assume that the applications are scheduled using static-cyclic scheduling. We are interested to determine the task mapping to processors, and the sequence and size of the time slots within the Major Frame on each processor, such that the applications are schedulable. We have proposed a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real-life benchmarks.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126445256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
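A minimal sketch of a generic Tabu Search loop over task-to-processor mappings, of the kind the abstract describes. The neighbourhood (re-mapping one task) and the load-balance cost used below are illustrative placeholders, not the paper's schedulability-driven cost function or its partition-slot optimization.

```python
import random

def tabu_search_mapping(tasks, processors, cost, steps=200, tabu_len=20):
    """Generic Tabu Search over task-to-processor mappings.

    cost(mapping) should return a schedulability-driven penalty
    (lower is better); here it is supplied by the caller.
    """
    mapping = {t: random.choice(processors) for t in tasks}
    best, best_cost = dict(mapping), cost(mapping)
    tabu = []  # recently applied (task, processor) moves
    for _ in range(steps):
        # Neighbourhood: re-map a single task to another processor.
        candidates = []
        for t in tasks:
            for p in processors:
                if p != mapping[t] and (t, p) not in tabu:
                    nb = dict(mapping)
                    nb[t] = p
                    candidates.append(((t, p), nb, cost(nb)))
        if not candidates:
            break
        move, mapping, c = min(candidates, key=lambda x: x[2])
        tabu.append(move)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if c < best_cost:
            best, best_cost = dict(mapping), c
    return best, best_cost

# Toy usage: balance task utilizations across two processors.
tasks = {"t1": 3, "t2": 2, "t3": 5, "t4": 1}
procs = ["P1", "P2"]
load_imbalance = lambda m: abs(
    sum(tasks[t] for t in tasks if m[t] == "P1")
    - sum(tasks[t] for t in tasks if m[t] == "P2"))
print(tabu_search_mapping(list(tasks), procs, load_imbalance))
```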
Correcting DFT Codes with Modified Berlekamp-Massey Algorithm and Syndrome Extension
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.39
G. Redinbo
{"title":"Correcting DFT Codes with Modified Berlekamp-Massey Algorithm and Syndrome Extension","authors":"G. Redinbo","doi":"10.1109/PRDC.2011.39","DOIUrl":"https://doi.org/10.1109/PRDC.2011.39","url":null,"abstract":"Real number block codes derived from the discrete Fourier transform (DFT) are corrected by coupling a very modified Berlekamp-Massey algorithm with a syndrome extension process. Enhanced extension recursions based on Kalman syndrome extensions are examined.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117236151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
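For background, a hedged sketch of the DFT-code idea the correction procedure builds on: codewords are constrained to have zeros in designated spectral bins, and the syndromes are the DFT of the received word at those bins, so nonzero syndromes flag errors. The code length, parity-bin choice, and the complex (rather than conjugate-symmetric, purely real) codeword are simplifications; the modified Berlekamp-Massey and syndrome-extension steps are not shown.

```python
import numpy as np

N, PARITY = 8, [1, 2]      # code length and the spectral bins constrained to zero

def encode(data):
    """Place data in the free DFT bins and zero the parity bins."""
    spectrum = np.zeros(N, dtype=complex)
    free = [k for k in range(N) if k not in PARITY]
    spectrum[free] = data
    return np.fft.ifft(spectrum)   # practical DFT codes add symmetry so this is real

def syndromes(received):
    """DFT of the received word evaluated at the parity bins."""
    return np.fft.fft(received)[PARITY]

data = np.random.randn(N - len(PARITY))
codeword = encode(data)
print(np.allclose(syndromes(codeword), 0))   # True: an error-free word has zero syndromes

corrupted = codeword.copy()
corrupted[3] += 1.0                          # inject a single component error
print(np.abs(syndromes(corrupted)))          # nonzero syndromes expose the error
```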
A Framework for Systematic Testing of Multi-threaded Applications
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.48
Mihai Florian
{"title":"A Framework for Systematic Testing of Multi-threaded Applications","authors":"Mihai Florian","doi":"10.1109/PRDC.2011.48","DOIUrl":"https://doi.org/10.1109/PRDC.2011.48","url":null,"abstract":"We present a framework that exhaustively explores the scheduling nondeterminism of multi-threaded applications and checks for concurrency errors. We use a flexible design that allows us to integrate multiple algorithms aimed at reducing the number of interleavings that have to be tested.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122882551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
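A toy version of the exhaustive exploration such a framework performs: enumerate every order-preserving interleaving of the threads' operations and check a property in each. The two-thread lost-update example below is invented for illustration; the actual framework controls a real scheduler and applies reduction algorithms to prune interleavings.

```python
# Toy exhaustive exploration of thread interleavings.
# Each "thread" is a list of operations on a shared dict; an interleaving is
# any merge of the per-thread sequences that preserves program order.

def interleavings(t1, t2):
    """Yield every order-preserving merge of two operation lists."""
    if not t1:
        yield list(t2); return
    if not t2:
        yield list(t1); return
    for rest in interleavings(t1[1:], t2):
        yield [t1[0]] + rest
    for rest in interleavings(t1, t2[1:]):
        yield [t2[0]] + rest

# A classic lost update: both threads do a non-atomic read-modify-write on x.
def make_thread(tmp_key):
    return [
        lambda s, k=tmp_key: s.__setitem__(k, s["x"]),      # read x into a local
        lambda s, k=tmp_key: s.__setitem__("x", s[k] + 1),   # write back x = local + 1
    ]

bad = 0
for schedule in interleavings(make_thread("a"), make_thread("b")):
    state = {"x": 0}
    for op in schedule:
        op(state)
    if state["x"] != 2:          # the intended result of two increments
        bad += 1
print(f"{bad} interleavings expose the lost update")
```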
Exploiting Total Order Multicast in Weakly Consistent Transactional Caches
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.21
P. Ruivo, Maria Couceiro, P. Romano, L. Rodrigues
{"title":"Exploiting Total Order Multicast in Weakly Consistent Transactional Caches","authors":"P. Ruivo, Maria Couceiro, P. Romano, L. Rodrigues","doi":"10.1109/PRDC.2011.21","DOIUrl":"https://doi.org/10.1109/PRDC.2011.21","url":null,"abstract":"Nowadays, distributed in-memory caches are increasingly used as a way to improve the performance of applications that require frequent access to large amounts of data. In order to maximize performance and scalability, these platforms typically rely on weakly consistent partial replication mechanisms. These schemes partition the data across the nodes and ensure a predefined (and typically very small) replication degree, thus maximizing the global memory capacity of the platform and ensuring that the cost to ensure replica consistency remains constant as the scale of the platform grows. Moreover, even though several of these platforms provide transactional support, they typically sacrifice consistency, ensuring guarantees that are weaker than classic 1-copy serializability, but that allow for more efficient implementations. This paper proposes and evaluates two partial replication techniques, providing different (weak) consistency guarantees, but having in common the reliance on total order multicast primitives to serialize transactions without incurring in distributed deadlocks, a main source of inefficiency of classical two-phase commit (2PC) based replication mechanisms. We integrate the proposed replication schemes into Infinispan, a prominent open-source distributed in-memory cache, which represents the reference clustering solution for the well-known JBoss AS platform. Our performance evaluation highlights speed-ups of up to 40x when using the proposed algorithms with respect to the native Infinispan replication mechanism, which relies on classic 2PC-based replication.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121899440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
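A minimal sketch of why total order multicast sidesteps the distributed deadlocks of 2PC-style replication: every replica delivers committing transactions in the same global order and applies them sequentially, so no cross-node lock waits can form. The sequencer, transaction, and replica classes below are invented for illustration and do not reflect Infinispan's actual protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Tx:
    writes: dict          # key -> value written by the transaction

@dataclass
class Replica:
    store: dict = field(default_factory=dict)

    def deliver(self, tx: Tx):
        # Transactions arrive in the same total order at every replica, so
        # applying them one by one needs no distributed locking and cannot
        # deadlock (unlike lock-based two-phase commit).
        self.store.update(tx.writes)

class Sequencer:
    """Toy stand-in for a total order multicast primitive."""
    def __init__(self, replicas):
        self.replicas = replicas

    def multicast(self, tx: Tx):
        for r in self.replicas:      # same delivery order everywhere
            r.deliver(tx)

replicas = [Replica(), Replica(), Replica()]
tom = Sequencer(replicas)
tom.multicast(Tx(writes={"x": 1}))
tom.multicast(Tx(writes={"x": 2, "y": 7}))
print([r.store for r in replicas])   # identical state on every replica
```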
Test Generation and Computational Complexity
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.40
J. Sziray
{"title":"Test Generation and Computational Complexity","authors":"J. Sziray","doi":"10.1109/PRDC.2011.40","DOIUrl":"https://doi.org/10.1109/PRDC.2011.40","url":null,"abstract":"The paper is concerned with analyzing and comparing two exact algorithms from the viewpoint of computational complexity. They are: composite justification and the D-algorithm. Both serve for calculating fault-detection tests of digital circuits. As a result, it is pointed out that the composite justification requires significantly less computational step than the D-algorithm. From this fact it has been conjectured that possibly no other algorithm is available in this field with fewer computational steps. If the claim holds, then it follows directly that the test-generation problem is of exponential time, and so are all the other NP-complete problems in the field of computation theory.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133491005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
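To make the term "justification" concrete, here is a toy recursive line-justification routine on a two-gate netlist: it searches for primary-input values that produce a required value on a line, backtracking over the alternative input combinations. Composite justification and the D-algorithm also handle fault activation and propagation and are far more involved; the netlist and gate set here are invented.

```python
# Toy line justification on a tiny combinational netlist.
# Each gate is (output, op, inputs); primary inputs are unconstrained.
NETLIST = [
    ("n1", "AND", ["a", "b"]),
    ("n2", "OR",  ["n1", "c"]),
]
GATES = {out: (op, ins) for out, op, ins in NETLIST}

def justify(line, value, assignment):
    """Find primary-input values that justify line == value, if any."""
    if line in assignment:
        return assignment if assignment[line] == value else None
    assignment = {**assignment, line: value}
    if line not in GATES:                      # primary input: nothing to justify
        return assignment
    op, ins = GATES[line]
    # Enumerate the input combinations producing `value`, backtracking on failure.
    for combo in [(x, y) for x in (0, 1) for y in (0, 1)]:
        produced = (combo[0] & combo[1]) if op == "AND" else (combo[0] | combo[1])
        if produced != value:
            continue
        result = justify(ins[0], combo[0], assignment)
        if result is not None:
            result = justify(ins[1], combo[1], result)
        if result is not None:
            return result
    return None

print(justify("n2", 1, {}))   # one consistent assignment of a, b, c, n1, n2
print(justify("n1", 1, {}))   # forces a = b = 1
```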
Revisiting Fault-Injection Experiment-Platform Architectures
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.46
Horst Schirmeier, Martin Hoffmann, R. Kapitza, D. Lohmann, O. Spinczyk
{"title":"Revisiting Fault-Injection Experiment-Platform Architectures","authors":"Horst Schirmeier, Martin Hoffmann, R. Kapitza, D. Lohmann, O. Spinczyk","doi":"10.1109/PRDC.2011.46","DOIUrl":"https://doi.org/10.1109/PRDC.2011.46","url":null,"abstract":"Many years of research on dependable, fault-tolerant software systems yielded a myriad of tool implementations for vulnerability analysis and experimental validation of resilience measures. Trace recording and fault injection are among the core functionalities these tools provide for hardware debuggers or system simulators, partially including some means to automate larger experiment campaigns. We argue that current fault-injection tools are too highly specialized for specific hardware devices or simulators, and are developed in poorly modularized implementations impeding evolution and maintenance. In this article, we present a novel design approach for a fault-injection infrastructure that allows experimenting researchers to switch simulator or hardware back ends with little effort, fosters experiment code reuse, and retains a high level of maintainability.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133058013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
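A sketch of the back-end abstraction the article argues for: campaign code is written once against a small interface (inject a fault, run to a point, read the outcome) and simulator or hardware back ends implement it. The class and method names below are hypothetical, not the authors' API.

```python
from abc import ABC, abstractmethod

class ExperimentBackend(ABC):
    """Minimal back-end interface an experiment campaign programs against."""
    @abstractmethod
    def run_until(self, address: int): ...
    @abstractmethod
    def inject_bitflip(self, mem_address: int, bit: int): ...
    @abstractmethod
    def read_result(self) -> int: ...

class SimulatorBackend(ExperimentBackend):
    """Stand-in for a system-simulator back end; a hardware-debugger back end
    would implement the same interface."""
    def __init__(self):
        self.memory = {0x1000: 42}
    def run_until(self, address):
        pass                                   # simulator would run to a breakpoint
    def inject_bitflip(self, mem_address, bit):
        self.memory[mem_address] ^= (1 << bit)
    def read_result(self):
        return self.memory[0x1000]

def campaign(backend: ExperimentBackend, bits=range(8)):
    """Experiment logic written once, reusable across back ends."""
    outcomes = {}
    for bit in bits:
        backend.inject_bitflip(0x1000, bit)
        backend.run_until(0x2000)
        outcomes[bit] = backend.read_result()
        backend.inject_bitflip(0x1000, bit)    # undo the flip for the next run
    return outcomes

print(campaign(SimulatorBackend()))
```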
Trend Analyses of Accidents and Dependability Improvement in Financial Information Systems
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.35
Koichi Bando, Kenji Tanaka
{"title":"Trend Analyses of Accidents and Dependability Improvement in Financial Information Systems","authors":"Koichi Bando, Kenji Tanaka","doi":"10.1109/PRDC.2011.35","DOIUrl":"https://doi.org/10.1109/PRDC.2011.35","url":null,"abstract":"In this paper, we analyze the trends of significant accidents in financial information systems from the user viewpoint. Based on the analyses, we show the priority issues for dependability improvement. First, as a prerequisite in this study, we define gaccidents, h gtypes of accidents, h gseverity of accidents, h and gfaults.h Second, we collected as many accident cases of financial information systems as possible during 12 years (1997-2008) from the information contained in four national major newspapers in Japan, news releases on websites, magazines, and books. Third, we analyzed the accident information according to type, severity, faults, and combinations of these factors. As a result, we showed the general trends of significant accidents. Last, based on the result of the analyses, we showed the priority issues for dependability improvement.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115215002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
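The kind of cross-tabulation the analysis describes (counting accidents by type, severity, fault cause, and their combinations) can be sketched as follows; the records are made up for illustration and are not the study's data.

```python
from collections import Counter

# Hypothetical accident records (type, severity, fault cause); the real study
# classified cases collected from newspapers and other public sources.
accidents = [
    ("ATM outage", "high", "software"),
    ("online banking outage", "medium", "operation"),
    ("settlement delay", "high", "software"),
    ("ATM outage", "low", "hardware"),
]

by_type = Counter(a[0] for a in accidents)
by_fault = Counter(a[2] for a in accidents)
by_type_and_severity = Counter((a[0], a[1]) for a in accidents)

print(by_type)
print(by_fault)
print(by_type_and_severity)
```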
Malware Profiler Based on Innovative Behavior-Awareness Technique
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.53
Shih-Yao Dai, Fedor V. Yarochkin, S. Kuo, Ming-Wei Wu, Yennun Huang
{"title":"Malware Profiler Based on Innovative Behavior-Awareness Technique","authors":"Shih-Yao Dai, Fedor V. Yarochkin, S. Kuo, Ming-Wei Wu, Yennun Huang","doi":"10.1109/PRDC.2011.53","DOIUrl":"https://doi.org/10.1109/PRDC.2011.53","url":null,"abstract":"In order to steal valuable data, hackers are uninterrupted research and development new techniques to intrude computer systems. Opposite to hackers, security researchers are uninterrupted analysis and tracking new malicious techniques for protecting sensitive data. There are a lot of existing analyzers can be used to help security researchers to analyze and track new malicious techniques. However, these existing analyzers cannot provide sufficient information to security researchers to perform precise assessment and deep analysis. In this paper, we introduce a behavior-based malicious software profiler, named Holography platform, to assist security researchers to obtain sufficient information. Holography platform analyzes virtualization hardware data, including CPU instructions, CPU registers, memory data and disk data, to obtain high level behavior semantic of all running processes. High level behavior semantic can provide sufficient information to security researchers to perform precise assessment and deep analysis new malicious techniques, such as malicious advertisement attack(malvertising attack).","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121284222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
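A toy sketch of the general idea of lifting low-level introspection events into high-level behavior semantics by matching event patterns; the rules, event names, and subsequence matching below are invented and do not represent the Holography platform's actual analysis.

```python
# Toy lifting of low-level VM-introspection events into behavior semantics.
# Each rule maps an ordered pattern of low-level events to a named behavior.
RULES = {
    "keylogging":        ["hook_keyboard", "write_file"],
    "data_exfiltration": ["read_file", "open_socket", "send_data"],
}

def profile(events):
    """Report behaviors whose event pattern occurs as a subsequence of the trace."""
    behaviors = []
    for name, pattern in RULES.items():
        it = iter(events)
        if all(step in it for step in pattern):   # ordered subsequence check
            behaviors.append(name)
    return behaviors

trace = ["read_file", "alloc_memory", "open_socket", "send_data"]
print(profile(trace))   # ['data_exfiltration']
```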
Parametric Bootstrapping for Assessing Software Reliability Measures
2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing Pub Date : 2011-12-12 DOI: 10.1109/PRDC.2011.10
Toshio Kaneishi, T. Dohi
{"title":"Parametric Bootstrapping for Assessing Software Reliability Measures","authors":"Toshio Kaneishi, T. Dohi","doi":"10.1109/PRDC.2011.10","DOIUrl":"https://doi.org/10.1109/PRDC.2011.10","url":null,"abstract":"The bootstrapping is a statistical technique to replicate the underlying data based on the resampling, and enables us to investigate the statistical properties. It is useful to estimate standard errors and confidence intervals for complex estimators of complex parameters of the probability distribution from a small number of data. In software reliability engineering, it is common to estimate software reliability measures from the fault data (fault-detection time data) and to focus on only the point estimation. However, it is difficult in general to carry out the interval estimation or to obtain the probability distributions of the associated estimators, without applying any approximate method. In this paper, we assume that the software fault-detection process in the system testing is described by a non-homogeneous Poisson process, and develop a comprehensive technique to study the probability distributions on significant software reliability measures. Based on the maximum likelihood estimation, we assess the probability distributions of estimators such as the initial number of software faults remaining in the software, software intensity function, mean value function and software reliability function, via parametric bootstrapping method.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125798362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
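A hedged sketch of parametric bootstrapping for one common NHPP model, the Goel-Okumoto mean value function m(t) = a(1 - e^{-bt}): fit (a, b) by maximum likelihood, simulate replicate fault-detection datasets from the fitted process, refit each replicate, and read the spread of the estimates. The data are synthetic and the generic Nelder-Mead optimizer stands in for the authors' estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 100.0                                   # length of the observed testing period

def neg_loglik(params, t, T):
    """Negative log-likelihood of the Goel-Okumoto NHPP, m(t) = a(1 - exp(-b t))."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -(np.sum(np.log(a * b) - b * t) - a * (1 - np.exp(-b * T)))

def fit(t, T):
    """Maximum-likelihood estimates of (a, b) from fault-detection times t."""
    return minimize(neg_loglik, x0=[len(t) * 1.2, 0.02], args=(t, T),
                    method="Nelder-Mead").x

def simulate(a, b, T):
    """Draw one fault-detection dataset from the NHPP with parameters (a, b)."""
    n = rng.poisson(a * (1 - np.exp(-b * T)))
    u = rng.uniform(size=n)
    return np.sort(-np.log(1 - u * (1 - np.exp(-b * T))) / b)

times = simulate(30.0, 0.05, T)             # made-up "observed" fault-detection times
a_hat, b_hat = fit(times, T)

# Parametric bootstrap: resample from the fitted model and refit each replicate.
boot = np.array([fit(simulate(a_hat, b_hat, T), T) for _ in range(200)])
lo, hi = np.percentile(boot[:, 0], [2.5, 97.5])
print(f"a_hat = {a_hat:.1f}; bootstrap 95% interval for a (total faults): ({lo:.1f}, {hi:.1f})")
```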