{"title":"Low-Overhead Architecture for Security Tag","authors":"Ryota Shioya, Daewung Kim, Kazuo Horio, M. Goshima, S. Sakai","doi":"10.1109/PRDC.2009.30","DOIUrl":"https://doi.org/10.1109/PRDC.2009.30","url":null,"abstract":"A security-tagged architecture is one that applies tags to data to detect attacks or information leakage by tracking data flow. Previous studies of security-tagged architectures mostly focused on how to utilize tags, not how the tags are implemented. A naive implementation simply adds a tag field to every byte of the cache and the memory. Such a technique, however, results in a huge hardware overhead. This paper proposes a low-overhead tagged architecture. We achieve our goal by exploiting two properties of tags: non-uniformity and locality of reference. Our design includes a uniquely designed multi-level table and various cache-like structures, all of which help exploit these properties. In simulation, our method limited the memory overhead to 1.8%, whereas a naive implementation suffered 12.5% overhead.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124791640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing the Effectiveness of Penetration Testing and Static Code Analysis on the Detection of SQL Injection Vulnerabilities in Web Services","authors":"Nuno Antunes, M. Vieira","doi":"10.1109/PRDC.2009.54","DOIUrl":"https://doi.org/10.1109/PRDC.2009.54","url":null,"abstract":"Web services are becoming business-critical components that must provide a non-vulnerable interface to client applications. However, previous research and practice show that many web services are deployed with critical vulnerabilities. SQL Injection vulnerabilities are particularly relevant, as web services frequently access a relational database using SQL commands. Penetration testing and static code analysis are two well-known techniques often used for the detection of security vulnerabilities. In this work we compare how effective these two techniques are at detecting SQL Injection vulnerabilities in web services code. To understand the strengths and limitations of these techniques, we used several commercial and open source tools to detect vulnerabilities in a set of vulnerable services. Results suggest that, in general, static code analyzers are able to detect more SQL Injection vulnerabilities than penetration testing tools. Another key observation is that tools implementing the same detection approach frequently detect different vulnerabilities. Finally, many tools provide low coverage and a high false positive rate, making them a poor option for programmers.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123208471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dependability Evaluation for Internet-Based Remote Systems","authors":"M. Kitakami, A. Katada, K. Namba, Hideo Ito","doi":"10.1109/PRDC.2009.47","DOIUrl":"https://doi.org/10.1109/PRDC.2009.47","url":null,"abstract":"Recently, the number of remote systems using the Internet has increased, and the services provided by such systems have become diverse. These systems are required to have high dependability. The existing evaluations have several problems. For example, evaluations based on RASIS are vague, and those provided by the Japanese government are very complicated. In short, the existing evaluations are not uniform, not easily understandable, and not quantitative. This paper proposes a security evaluation metric, a part of RASIS, for remote systems using the Internet. The proposed metric gives a quantitative evaluation in a manner similar to availability evaluation. It is based on the time for which the system can tolerate a take-over attack, one of the biggest threats among attacks through the Internet. It can give appropriate system parameters to achieve the desired security. This paper applies the metric to example systems in order to confirm its effectiveness.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129599033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantifying Criticality of Dependability-Related IT Organization Processes in CobiT","authors":"T. Goldschmidt, Andreas Dittrich, M. Malek","doi":"10.1109/PRDC.2009.60","DOIUrl":"https://doi.org/10.1109/PRDC.2009.60","url":null,"abstract":"With the ever-growing complexity of computer and communication systems, analytical methods do not scale, especially with respect to dependability assessment of information technology (IT) organization. Generic reference models can be used as an alternative to analytical approaches by transforming qualitative assessment into quantitative evaluation of IT organization. In this paper, we examine the reference models IT Infrastructure Library (ITIL) and Control Objectives for Information and Related Technology (CobiT) to derive a quantifiable concept for estimating the criticality of dependability-related IT organization processes in CobiT. After systematically analyzing ITIL processes and deriving properties that are relevant to dependability, those processes are mapped onto CobiT processes. Furthermore, we propose a process criticality index (PCI) which reflects the significance of each dependability-related process within a particular reference model. The PCI is based on the graph theory concept of betweenness centrality and uses a directed graph in which nodes represent dependability-related processes and edges represent relations among them. Finally, using cycle and sequence analysis, we identify for every process which processes have to be implemented a priori. This provides an efficient strategy for implementing the most significant processes first, according to the ranking based on the PCI.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129927017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Multiple-Round DOR Routing for 2D Network-on-Chip Meshes","authors":"Binzhang Fu, Yinhe Han, Huawei Li, Xiaowei Li","doi":"10.1109/PRDC.2009.50","DOIUrl":"https://doi.org/10.1109/PRDC.2009.50","url":null,"abstract":"Network-on-Chip (NoC) meshes are limited by reliability constraints, which motivates the exploration of fault-tolerant routing. In particular, one of the main design issues is minimizing the loss of non-faulty routers in the presence of faults. To address this problem, we propose a new fault-tolerant routing algorithm with two distinct advantages. First, it keeps the network deadlock-free by utilizing restricted intermediate nodes rather than adding virtual channels (VCs); this characteristic leads to an area-efficient router. Second, in the proposed routing algorithm, the number of rounds of DOR is no longer limited by the number of VCs. As a consequence, the number of sacrificed non-faulty routers is significantly reduced. We demonstrate these advantages through extensive simulations. The experimental results show that, under the VC limitation, the proposed routing algorithm always sacrifices the minimal number of non-faulty routers compared to previous solutions.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123962698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Built-In Self-Repair Techniques for Heterogeneous Memory Cores","authors":"Zhen-Yu Wang, Yi-Ming Tsai, Shyue-Kung Lu","doi":"10.1109/PRDC.2009.19","DOIUrl":"https://doi.org/10.1109/PRDC.2009.19","url":null,"abstract":"In this paper, BISR (built-in self-repair) techniques for heterogeneous multiple memory cores with a divided redundancy mechanism are proposed. Redundant memories are partitioned into row blocks and column blocks and shared among all memory cores in the same memory group. Therefore, unlike the traditional redundancy mechanism, a row (column) block is used as the basic replacement element. Based on the proposed divided redundancy mechanism, a heuristic heterogeneous extended spare pivoting (HESP) redundancy analysis algorithm suitable for built-in implementation is also proposed. Experimental results show that repair rates can be improved significantly due to the efficient usage of redundancy. Moreover, the area overhead of the BISR circuitry for an example with four memory instances is only 1.12%.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"354 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115930007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Test Case Reuse Based on Ontology","authors":"Lizhi Cai, W. Tong, Zhenyu Liu, Juan Zhang","doi":"10.1109/PRDC.2009.25","DOIUrl":"https://doi.org/10.1109/PRDC.2009.25","url":null,"abstract":"Test cases are one of the most important assets in the testing process, and their management and retrieval play a vital role in test case reuse. This paper presents a testing ontology based on SWEBOK and a software quality model; SWEBOK provides broad agreement on the content of the software engineering discipline. Keyword-based and facet-based retrieval cannot meet users' flexible query requirements because they lack semantic information. Finally, this paper discusses the management and retrieval of test cases based on the semantic similarity of two test concepts in two ontologies, computed according to the difference sets of super concepts, sub concepts, extension, and intension.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125434494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"D2STM: Dependable Distributed Software Transactional Memory","authors":"Maria Couceiro, P. Romano, N. Carvalho, L. Rodrigues","doi":"10.1109/PRDC.2009.55","DOIUrl":"https://doi.org/10.1109/PRDC.2009.55","url":null,"abstract":"To date, the problem of how to build distributed and replicated Software Transactional Memory (STM) to enhance both dependability and performance remains largely unexplored. This paper fills this gap by presenting D2STM, a replicated STM whose consistency is ensured in a transparent manner, even in the presence of failures. Strong consistency is enforced at transaction commit time by a non-blocking distributed certification scheme, which we name BFC (Bloom Filter Certification). BFC exploits a novel Bloom filter-based encoding mechanism that significantly reduces the overheads of replica coordination at the cost of a user-tunable increase in the probability of transaction abort. Through an extensive experimental study based on standard STM benchmarks, we show that the BFC scheme achieves remarkable performance gains even for negligible (e.g., 1%) increases in the transaction abort rate.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123848448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantitative Intrusion Intensity Assessment Using Important Feature Selection and Proximity Metrics","authors":"Sang Min Lee, Dong Seong Kim, Y. Yoon, Jong Sou Park","doi":"10.1109/PRDC.2009.29","DOIUrl":"https://doi.org/10.1109/PRDC.2009.29","url":null,"abstract":"The problem with previous approaches to anomaly detection in Intrusion Detection Systems (IDS) is that they provide only a binary detection result: intrusion or normal. This is a main cause of high false rates and inaccurate detection rates in IDS. In this paper, we propose a new approach named Quantitative Intrusion Intensity Assessment (QIIA). QIIA exploits feature selection and proximity metric computation to provide a quantitative intrusion (or normal) intensity value, representing how close an instance of audit data is to intrusion or normal as a numerical value. Prior to applying QIIA to audit data, we perform feature selection and parameter optimization of the detection model, not only to decrease the overhead of processing audit data but also to enhance detection rates. QIIA is then performed using Random Forest (RF), generating proximity metrics that represent the intrusion intensity numerically. The numerical values are used to determine whether unknown audit data is intrusion or normal. We carry out several experiments on the KDD 1999 dataset and report the evaluation results.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131494972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Measurement System to Improve E-Business Capability of Human Resources in an Organizational Computing Environment","authors":"C. Yoon, Keon-Myung Lee, J. Jeon, In Sung Lee","doi":"10.1109/PRDC.2009.67","DOIUrl":"https://doi.org/10.1109/PRDC.2009.67","url":null,"abstract":"This study presents a measurement system that can efficiently gauge and interpret e-Business HR (e-HR) capability in an organizational computing environment. The system comprises a measurement and interpretation model with a measurement stage, process, and method. The validity and reliability of the developed model construct were verified by factor and reliability analysis using SPSS software. The applicability and utility of the developed system were confirmed by applying it to a case study.","PeriodicalId":356141,"journal":{"name":"2009 15th IEEE Pacific Rim International Symposium on Dependable Computing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132872119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}