Security Impacts of Virtualization on a Network Testbed
Yu-Lun Huang, Borting Chen, Ming-Wei Shih, Chien-Yu Lai
2012 IEEE Sixth International Conference on Software Security and Reliability (SERE), 20 June 2012. DOI: 10.1109/SERE.2012.17

Abstract: Modern virtualization technologies provide optimal use of underused hardware resources by sharing them among virtual machines hosted on the same physical machine. They have therefore been broadly adopted in many areas, such as server consolidation and virtualized network testbeds. A large-scale network testbed is considered one of the most useful tools for evaluating or verifying advanced networking technologies. To match a real-world setup as closely as possible, a testbed should meet the requirements of isolation, fidelity, repeatability, scalability, containment, and extensibility. Among these, scalability can be realized by modern virtualization technology, whereas the vulnerabilities and security weaknesses that virtualization brings along can harm other requirements of a network testbed, such as isolation and fidelity. This paper reviews modern virtualization technologies, their resource management mechanisms, and the known attacks against them. We then discuss the requirements of existing network testbeds and the security impacts of introducing modern virtualization technologies into a network testbed.
An Embedded Software Reliability Model with Consideration of Hardware Related Software Failures
Jinhee Park, Hyeon-Jeong Kim, Ju-Hwan Shin, Jongmoon Baik
2012 IEEE Sixth International Conference on Software Security and Reliability (SERE), 20 June 2012. DOI: 10.1109/SERE.2012.10

Abstract: As the software in an embedded system has taken charge of controlling both software and hardware components, accurately estimating its reliability has become increasingly important. To estimate the reliability of a target software system, software reliability models are often applied to software failure data. Since software and hardware are highly correlated and frequently interact with each other in embedded systems, both are contributing factors to software failures. Thus, the influence of both software and hardware faults on software failures should be taken into account when estimating software reliability. However, many researchers have developed software reliability models assuming that software failures are caused only by software faults, which can lead to inaccurate reliability estimates. In this paper, we propose two new reliability models that consider both software and hardware faults as root causes of software failures in embedded software reliability estimation. The proposed models are compared with existing models for validity, and analysis results on real project data are presented. The experimental results show that a Weibull-based model, which takes the characteristics of hardware degradation into account, has better goodness of fit and superior accuracy for software reliability estimation. The proposed model thus provides more accurate software reliability estimates and helps set better testing strategies in the early phases of embedded software testing.
{"title":"Robust Wavelet Shrinkage Estimation without Data Transform for Software Reliability Assessment","authors":"Xiao Xiao, T. Dohi","doi":"10.1109/SERE.2012.34","DOIUrl":"https://doi.org/10.1109/SERE.2012.34","url":null,"abstract":"Since software failure occurrence process is well-modeled by a non-homogeneous Poisson process, it is of great interest to estimate accurately the software intensity function from observed software-fault count data. In the existing work the same authors introduced the wavelet-based techniques for this problem and found that the Haar wavelet transform provided a very powerful performance in estimating software intensity function. In this paper, we also study the Haar-wavelet-transform-based approach, but without using approximate transformations. In numerical study with real software-fault count data, we compare the proposed robust estimation with the existing wavelet-based estimation as well as the conventional maximum likelihood estimation and least squares estimation methods.","PeriodicalId":191716,"journal":{"name":"2012 IEEE Sixth International Conference on Software Security and Reliability","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129262347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"µTIL: Mutation-based Statistical Test Inputs Generation for Automatic Fault Localization","authors":"Mickaël Delahaye, L. Briand, A. Gotlieb, M. Petit","doi":"10.1109/SERE.2012.32","DOIUrl":"https://doi.org/10.1109/SERE.2012.32","url":null,"abstract":"Automatic Fault Localization (AFL) is a process to locate faults automatically in software programs. Essentially, an AFL method takes as input a set of test cases including failed test cases, and ranks the statements of a program from the most likely to the least likely to contain a fault. As a result, the efficiency of an AFL method depends on the \"quality\" of the test cases used to rank statements. More specifically, in order to improve the accuracy of their ranking within test budget constraints, we have to ensure that program statements are executed by a reasonably large number of test cases which provide a coverage as uniform as possible of the input domain. This paper proposes μTIL, a new statistical test inputs generation method dedicated to AFL, based on constraint solving and mutation testing. Using mutants where the locations of injected faults are known, μTIL is able to significantly reduce the length of an AFL test suite while retaining its accuracy (i.e., the code size to examine before spotting the fault). In order to address the motivations stated above, the statistical generator objectives are two-fold: 1) each feasible path of the program is activated with the same probability, 2) the sub domain associated to each feasible path is uniformly covered. Using several widely used ranking techniques (i.e., Tarantula, Jaccard, Ochiai), we show on a small but realistic program that a proof-of-concept implementation of μTIL can generate test sets with significantly better fault localization accuracy than both random testing and adaptive random testing. We also show on the same program that using mutation testing enables a 75% length reduction of the AFL test suite without decrease in accuracy.","PeriodicalId":191716,"journal":{"name":"2012 IEEE Sixth International Conference on Software Security and Reliability","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116712537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VRank: A Context-Aware Approach to Vulnerability Scoring and Ranking in SOA","authors":"Jianchun Jiang, Liping Ding, Ennan Zhai, Ting Yu","doi":"10.1109/SERE.2012.16","DOIUrl":"https://doi.org/10.1109/SERE.2012.16","url":null,"abstract":"With the rapid adoption of the concepts of Service Oriented Architecture (SOA), sophisticated business processes and tasks are increasingly realized through composing distributed software components offered by different providers. Though such practices offer advantages in terms of cost-effectiveness and flexibility, those components are not immune to vulnerabilities. It is therefore important for the administrator of some composed service to evaluate the threats of such vulnerabilities accordingly within limited available information. Since almost all the existing efforts (e.g., CVSS) fail to consider specific context-aware information which is the specific character of SOA, they could not be adopted into SOA for scoring vulnerabilities. In this paper, we present VRank, a novel framework for the scoring and ranking of vulnerabilities in SOA. Different from existing efforts, for a given vulnerability, VRank not only considers its intrinsic properties (e.g., exploitability), but also takes into account the contexts of the services having this vulnerability, e.g., what roles they play in the composed service and how critical it is to the security objective of the service. The resulting scoring and ranking of vulnerabilities are thus highly relevant and meaningful to the composed service. We present the detailed design of VRank, and compare it with CVSS. Our experiments indicate VRank is able to provide much more useful ranking lists of vulnerabilities for complex composed services.","PeriodicalId":191716,"journal":{"name":"2012 IEEE Sixth International Conference on Software Security and Reliability","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116343713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability Analysis of PLC Systems by Bayesian Network
Hehua Zhang, Yu Jiang, X. Jiao, Xiaoyu Song, W. Hung, M. Gu
2012 IEEE Sixth International Conference on Software Security and Reliability (SERE), 20 June 2012. DOI: 10.1109/SERE.2012.26

Abstract: Reliability analysis is important throughout the life cycle of a safety-critical Programmable Logic Controller (PLC) system. The complexity of PLC reliability analysis arises in handling the intricate relations between hardware components and embedded software: different embedded software may lead to different arrangements of hardware execution and hence different system reliability quantities. In this paper, we propose a novel probabilistic model, named the hybrid relation model (HRM), for the reliability analysis of PLC systems. It is constructed from the distribution of the hardware components and the execution logic of the embedded software. We map the hardware components to HRM nodes and embed their failure probabilities into the well-defined conditional probability distribution tables of those nodes. The HRM then captures the failure probability of each hardware component, as well as the complex relations induced by the execution logic of the embedded software, using the computational machinery of a Bayesian network. Experimental results demonstrate the accuracy of our model.
{"title":"Software Fault Localization Using DStar (D*)","authors":"W. E. Wong, V. Debroy, Yihao Li, Ruizhi Gao","doi":"10.1109/SERE.2012.12","DOIUrl":"https://doi.org/10.1109/SERE.2012.12","url":null,"abstract":"Effective debugging is crucial to producing dependable software. Manual debugging is becoming prohibitively expensive, especially due to the growing size and complexity of programs. Given that fault localization is one of the most expensive activities in program debugging, there has been a great demand for fault localization techniques that can help guide programmers to the locations of faults. In this paper a technique named DStar (D*), which has its origins rooted in similarity coefficient-based analysis, is proposed, which can identify suspicious locations for fault localization automatically without requiring any prior information on program structure or semantics. D* is evaluated across 21 programs and is compared to 16 different fault localization techniques. Both single-fault and multi-fault programs are used. Results indicate that D* is more effective at locating faults than all the other techniques it is compared to.","PeriodicalId":191716,"journal":{"name":"2012 IEEE Sixth International Conference on Software Security and Reliability","volume":"2006 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125558448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile Application and Device Power Usage Measurements
Rahul Murmuria, Jeffrey Medsger, A. Stavrou, J. Voas
2012 IEEE Sixth International Conference on Software Security and Reliability (SERE), 20 June 2012. DOI: 10.1109/SERE.2012.19

Abstract: Reducing power consumption has become a crucial design tenet for mobile and other small computing devices that are not constantly connected to a power source. However, unlike devices with a limited and predefined set of functionality, recent smartphones have a very rich set of components and can run multiple general-purpose programs that are not known or profiled a priori. In this paper, we present a general methodology for collecting measurements and modeling power usage on smartphones. Our goal is to characterize the device subsystems and perform accurate power measurements. We implemented a system that effectively accounts for the power usage of all of the primary hardware subsystems on the phone: CPU, display, graphics, GPS, audio, microphone, and Wi-Fi. To achieve this, we make use of the per-subsystem time shares reported by the operating system's power-management module. We show that the model can further attribute power consumption to individual applications given these measurements, and that it is feasible to operate the model in real time without a significant impact on the power footprint of the monitored devices.
{"title":"CRAX: Software Crash Analysis for Automatic Exploit Generation by Modeling Attacks as Symbolic Continuations","authors":"Shih-Kun Huang, Min-Hsiang Huang, Po-Yen Huang, Chung-Wei Lai, Han-Lin Lu, Wai-Meng Leong","doi":"10.1109/SERE.2012.20","DOIUrl":"https://doi.org/10.1109/SERE.2012.20","url":null,"abstract":"We present a simple framework capable of automatically generating attacks that exploit control flow hijacking vulnerabilities. We analyze given software crashes and perform symbolic execution in concolic mode, using a whole system environment model. The framework uses an end-to-end approach to generate exploits for various applications, including 16 medium scale benchmark programs, and several large scale applications, such as Mplayer (a media player), Unrar (an archiver) and Foxit(a pdf reader), with stack/heap overflow, off-by-one overflow, use of uninitialized variable, format string vulnerabilities. Notably, these applications have been typically regarded as fuzzing preys, but still require a manual process with security knowledge to produce mitigation-hardened exploits. Using our system to produce exploits is a fully automated and straightforward process for crashed software without source. We produce the exploits within six minutes for medium scale of programs, and as long as 80 minutes for mplayer (about 500,000 LOC), after constraint reductions. Our results demonstrate that the link between software bugs and security vulnerabilities can be automatically bridged.","PeriodicalId":191716,"journal":{"name":"2012 IEEE Sixth International Conference on Software Security and Reliability","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123339606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Precise Information Flow Measure from Imprecise Probabilities","authors":"Sari Haj Hussein","doi":"10.1109/SERE.2012.25","DOIUrl":"https://doi.org/10.1109/SERE.2012.25","url":null,"abstract":"Dempster-Shafer theory of imprecise probabilities has proved useful to incorporate both nonspecificity and conflict uncertainties in an inference mechanism. The traditional Bayesian approach cannot differentiate between the two, and is unable to handle non-specific, ambiguous, and conflicting information without making strong assumptions. This paper presents a generalization of a recent Bayesian-based method of quantifying information flow in Dempster-Shafer theory. The generalization concretely enhances the original method removing all its weaknesses that are highlighted in this paper. In so many words, our generalized method can handle any number of secret inputs to a program, it enables the capturing of an attacker's beliefs in all kinds of sets (singleton or not), and it supports a new and precise quantitative information flow measure whose reported flow results are plausible in that they are bounded by the size of a program's secret input, and can be easily associated with the exhaustive search effort needed to uncover a program's secret information, unlike the results reported by the original metric.","PeriodicalId":191716,"journal":{"name":"2012 IEEE Sixth International Conference on Software Security and Reliability","volume":"799 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116139278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}