{"title":"Using Mutant Stubbornness to Create Minimal and Prioritized Test Sets","authors":"Loreto Gonzalez-Hernandez, B. Lindström, A. Offutt, S. F. Andler, P. Potena, M. Bohlin","doi":"10.1109/QRS.2018.00058","DOIUrl":"https://doi.org/10.1109/QRS.2018.00058","url":null,"abstract":"In testing, engineers want to run the most useful tests early (prioritization). When tests are run hundreds or thousands of times, minimizing a test set can result in significant savings (minimization). This paper proposes a new analysis technique to address both the minimal test set and the test case prioritization problems. This paper precisely defines the concept of mutant stubbornness, which is the basis for our analysis technique. We empirically compare our technique with other test case minimization and prioritization techniques in terms of the size of the minimized test sets and how quickly mutants are killed. We used seven C language subjects from the Siemens Repository, specifically the test sets and the killing matrices from a previous study. We used 30 different orders for each set and ran every technique 100 times over each set. Results show that our analysis technique performed significantly better than prior techniques for creating minimal test sets and was able to establish new bounds for all cases. Also, our analysis technique killed mutants as fast or faster than prior techniques. These results indicate that our mutant stubbornness technique constructs test sets that are both minimal in size, and prioritized effectively, as well or better than other techniques.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131811077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Empirical Study of the Impact of Code Smell on File Changes","authors":"Can Zhu, Xiaofang Zhang, Yang Feng, Lin Chen","doi":"10.1109/QRS.2018.00037","DOIUrl":"https://doi.org/10.1109/QRS.2018.00037","url":null,"abstract":"Code smells are considered to have negative impacts on software evolution and maintenance. Many researchers have conducted studies to investigate these effects and correlations. However, because code smells constantly change in the evolution, understanding these changes and the correlation between them and the operations of source code files is helpful for developers in maintenance. In this paper, on four popular Java projects with 58 release versions, we conduct an extensive empirical study to investigate the correlation between code smells and basic operations of source code files. We find that, the density of code smells decreases with the software evolution. The files containing smells have a higher likelihood to be modified while smells are not strongly correlated with adding or removing files. Furthermore, some certain smells have significant impact on file changes. These findings are helpful for developers to understand the evolution of code smells and better focus on quality assurance.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129121575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Program Slicing-Based Bayesian Network Model for Change Impact Analysis","authors":"Ekincan Ufuktepe, Tugkan Tuglular","doi":"10.1109/QRS.2018.00062","DOIUrl":"https://doi.org/10.1109/QRS.2018.00062","url":null,"abstract":"Change impact analysis plays an important role in identifying potential affected areas that are caused by changes that are made in a software. Most of the existing change impact analysis techniques are based on architectural design and change history. However, source code-based change impact analysis studies are very few and they have shown higher precision in their results. In this study, a static method-granularity level change impact analysis, that uses program slicing and Bayesian Network technique has been proposed. The technique proposes a directed graph model that also represents the call dependencies between methods. In this study, an open source Java project with 8999 to 9445 lines of code and from 505 to 528 methods have been analyzed through 32 commits it went. Recall and f-measure metrics have been used for evaluation of the precision of the proposed method, where each software commit has been analyzed separately.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127424755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Suitability of a Portfolio-Based Design Improvement Approach","authors":"Johannes Bräuer, Reinhold Plösch, Christian Körner, Matthias Saft","doi":"10.1109/QRS.2018.00038","DOIUrl":"https://doi.org/10.1109/QRS.2018.00038","url":null,"abstract":"The design debt metaphor tries to illustrate quality deficits in the design of a software and the impact thereof to the business value of the system. To pay off the debt, the literature offers various approaches for identifying and prioritizing these design flaws, but without proper support in aligning strategic improvement actions to the identified issues. This work addresses this challenge and examines the suitability of our proposed portfolio-based design assessment approach. Therefore, this investigation is conducted based on three case studies where the product source code was analyzed and assessed using our portfolio-based approach. As a result, the approach has proven to be able to recommend concrete and valuable design improvement actions that can be adapted to project constraints.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124069785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Analysis of Complex Industrial Test Code Using Clone Analysis","authors":"Wafa Hasanain, Y. Labiche, Sigrid Eldh","doi":"10.1109/QRS.2018.00061","DOIUrl":"https://doi.org/10.1109/QRS.2018.00061","url":null,"abstract":"Many companies, including Ericsson, experience increased software verification costs. Agile cross-functional teams find it easy to make new additions of test cases for every change and fix. The consequence of this phenomenon is duplications of test code. In this paper, we perform an industrial case study that aims at better understanding such duplicated test fragments or as we call them, clones. In our study, 49% (LOC) of the entire test code are clones. The reported results include figures about clone frequencies, types, similarity, fragments, and size distributions, and the number of line differences in cloned test cases. It is challenging to keep clones consistent and remove unnecessary clones during the entire testing process of large-scale commercial software.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134146718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Automatic Parameterized Verification of FLASH Cache Coherence Protocol","authors":"Yongjian Li, Jialun Cao, Kaiqiang Duan","doi":"10.1109/QRS.2018.00018","DOIUrl":"https://doi.org/10.1109/QRS.2018.00018","url":null,"abstract":"FLASH protocol is an industrial-scale cache coherence protocol, which is a challenging benchmark in the formal verification area. Verifying such protocol yields both scientific and commercial values. However, the complicated mechanism of protocols and the explosive searching states make it extremely hard to solve. An alternative solution is to carry out proof scripts combining manual work with a computer, which is adopted by most works in this area. However, this alternation makes the verification process neither effective nor rigorous. Therefore, in this paper, we elaborate the detailed process of how paraVerifier generates formal proofs automatically. It can generate a formal proof without manual works, and guarantee the rigorous correctness at the same time. Furthermore, we also illustrate the flow chart of READ and WRITE transactions in FLASH protocol, and analyze the semantics hiding behind the auto-searched invariants. We show that paraVerifier can not only automatically generate formal proofs, but offer comprehensive analyzing reports for better understanding.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"323 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133949543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spiral^SRA: A Threat-Specific Security Risk Assessment Framework for the Cloud","authors":"A. Nhlabatsi, Jin B. Hong, Dong Seong Kim, Rachael Fernandez, Noora Fetais, K. Khan","doi":"10.1109/QRS.2018.00049","DOIUrl":"https://doi.org/10.1109/QRS.2018.00049","url":null,"abstract":"Conventional security risk assessment approaches for cloud infrastructures do not explicitly consider risk with respect to specific threats. This is a challenge for a cloud provider because it may apply the same risk assessment approach in assessing the risk of all of its clients. In practice, the threats faced by each client may vary depending on their security requirements. The cloud provider may also apply generic mitigation strategies that are not guaranteed to be effective in thwarting specific threats for different clients. This paper proposes a threat-specific risk assessment framework which evaluates the risk with respect to specific threats by considering only those threats that are relevant to a particular cloud client. The risk assessment process is divided into three phases which have inter-related activities arranged in a spiral. Application of the framework to a cloud deployment case study shows that considering risk with respect to specific threats leads to a more accurate quantification of security risk. Although our framework is motivated by risk assessment challenges in the cloud it can be applied in any network environment.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"254 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133622346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hypervisor-Based Sensitive Data Leakage Detector","authors":"Shu-Hao Chang, S. Mallissery, Chih-Hao Hsieh, Yu-Sung Wu","doi":"10.1109/QRS.2018.00029","DOIUrl":"https://doi.org/10.1109/QRS.2018.00029","url":null,"abstract":"Sensitive Data Leakage (SDL) is a major issue faced by organizations due to increasing reliance on data-driven decision-making. Existing Data Leakage Prevention (DLP) solutions are being challenged by the adoption of network transport encryption and the presence of privileged-mode malware designed to tamper with the DLP agent programs. We propose a novel DLP system called \"HyperSweep\" that uses Virtual Machine Memory Introspection (VMI) technology to inspect the memory content of a guest system for sensitive information. The approach is robust against both network transport encryption and malware that attack DLP agent programs. The HyperSweep prototype is implemented on top of the KVM hypervisor. Our experiments have confirmed its applicability to real-world applications, including web browsers, office applications, and social networking applications. The experiments also indicate moderate performance overhead from applying HyperSweep.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134550384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SynEva: Evaluating ML Programs by Mirror Program Synthesis","authors":"Yi Qin, Huiyan Wang, Chang Xu, Xiaoxing Ma, Jian Lu","doi":"10.1109/QRS.2018.00031","DOIUrl":"https://doi.org/10.1109/QRS.2018.00031","url":null,"abstract":"Machine learning (ML) programs are being widely used in various human-related applications. However, their testing always remains to be a challenging problem, and one can hardly decide whether and how the existing knowledge extracted from training scenarios suit new scenarios. Existing approaches typically have restricted usages due to their assumptions on the availability of an oracle, comparable implementation, or manual inspection efforts. We solve this problem by proposing a novel program synthesis based approach, SynEva, that can systematically construct an oracle-alike mirror program for similarity measurement, and automatically compare it with the existing knowledge on new scenarios to decide how the knowledge suits the new scenarios. SynEva is lightweight and fully automated. Our experimental evaluation with real-world data sets validates SynEva's effectiveness by strong correlation and little overhead results. We expect that SynEva can apply to, and help evaluate, more ML programs for new scenarios.","PeriodicalId":114973,"journal":{"name":"2018 IEEE International Conference on Software Quality, Reliability and Security (QRS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117343923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}