{"title":"NUMFL: Localizing Faults in Numerical Software Using a Value-Based Causal Model","authors":"Zhuofu Bai, Gang Shu, Andy Podgurski","doi":"10.1109/ICST.2015.7102597","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102597","url":null,"abstract":"We present NUMFL, a value-based causal inference model for localizing faults in numerical software. NUMFL combines causal and statistical analyses to characterize the causal effects of individual numerical expressions on failures. Given value-profiles for an expression's variables, NUMFL uses generalized propensity scores (GPSs) to reduce confounding bias caused by evaluation of other, faulty expressions. It estimates the average failure-causing effect of an expression using quadratic regression models fit within GPS subclasses. We report on an evaluation of NUMFL with components from four Java numerical libraries, in which it was compared to five alternative statistical fault localization metrics. The results indicate that NUMFL is the most effective technique overall.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130229083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The MIDAS Cloud Platform for Testing SOA Applications","authors":"S. Herbold, A. D. Francesco, J. Grabowski, Patrick Harms, L. Hillah, F. Kordon, Ariele-Paolo Maesano, Libero Maesano, C. Napoli, F. Rosa, Martin A. Schneider, N. Tonellotto, Marc-Florian Wendland, Pierre-Henri Wuillemin","doi":"10.1109/ICST.2015.7102636","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102636","url":null,"abstract":"While Service Oriented Architectures (SOAs) are largely deployed online, and today often in a cloud, the testing of such systems still happens mostly locally. In this paper, we present MIDAS Testing as a Service (TaaS), a cloud platform for the testing of SOAs. We focus on the testing of whole SOA orchestrations, a complex task due to the number of potential service interactions and the increasing complexity with each service that joins an orchestration. Since traditional testing does not scale well with such a complex setup, we employ a Model-based Testing (MBT) approach based on the Unified Modeling Language (UML) and the UML Testing Profile (UTP) within MIDAS. Through this, we provide methods for functional testing, security testing, and usage-based testing of service orchestrations. By harnessing the computational power of the cloud, MIDAS is able to generate and execute complex test scenarios that would be infeasible to run in a local environment.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128678213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"JSEFT: Automated Javascript Unit Test Generation","authors":"Shabnam Mirshokraie, A. Mesbah, K. Pattabiraman","doi":"10.1109/ICST.2015.7102595","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102595","url":null,"abstract":"The event-driven and highly dynamic nature of JavaScript, as well as its runtime interaction with the Document Object Model (DOM), make it challenging to test JavaScript-based applications. Current web test automation techniques target the generation of event sequences, but they ignore testing the JavaScript code at the unit level. Further, they either ignore the oracle problem completely or simplify it through generic soft oracles such as HTML validation and runtime exceptions. We present a framework to automatically generate test cases for JavaScript applications at two complementary levels, namely events and individual JavaScript functions. Our approach employs a combination of function coverage maximization and function state abstraction algorithms to efficiently generate test cases. In addition, these test cases are strengthened by automatically generated mutation-based oracles. We empirically evaluate the implementation of our approach, called JSEFT, to assess its efficacy. The results, on 13 JavaScript-based applications, show that the generated test cases achieve a coverage of 68% and that JSEFT can detect injected JavaScript and DOM faults with a high accuracy (100% precision, 70% recall). We also find that JSEFT outperforms an existing JavaScript test automation framework both in terms of coverage and detected faults.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121013307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"History-Based Test Case Prioritization for Black Box Testing Using Ant Colony Optimization","authors":"T. Noguchi, H. Washizaki, Y. Fukazawa, Atsutoshi Sato, Kenichiro Ota","doi":"10.1109/ICST.2015.7102622","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102622","url":null,"abstract":"Test case prioritization is a technique to improve software testing. Although much prior work has investigated test case prioritization, it focuses on white box testing or regression testing. However, software testing is often outsourced to a testing company whose testers are rarely able to access source code due to contractual restrictions. Herein, a framework is proposed to prioritize test cases for black box testing of a new product using the test execution history collected from a similar prior product and Ant Colony Optimization. A simulation using two actual products shows the effectiveness and practicality of the proposed framework.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127051785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Display Energy Hotspots in Android Apps","authors":"Mian Wan, Yuchen Jin, Ding Li, Jiaping Gui, Sonal Mahajan, William G. J. Halfond","doi":"10.1002/stvr.1635","DOIUrl":"https://doi.org/10.1002/stvr.1635","url":null,"abstract":"Energy consumption of mobile apps has become an important consideration as the underlying devices are constrained by battery capacity. Display represents a significant portion of an app's energy consumption. However, developers lack techniques to identify the user interfaces in their apps for which energy needs to be improved. In this paper, we present a technique for detecting display energy hotspots - user interfaces of a mobile app whose energy consumption is greater than optimal. Our technique leverages display power modeling and automated display transformation techniques to detect these hotspots and prioritize them for developers. In an evaluation on a set of popular Android apps, our technique was very accurate in both predicting energy consumption and ranking the display energy hotspots. Our approach was also able to detect display energy hotspots in 398 Android market apps, showing its effectiveness and the pervasiveness of the problem. These results indicate that our approach represents a potentially useful technique for helping developers to detect energy related problems and reduce the energy consumption of their mobile apps.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130435878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Multi-Locators to Increase the Robustness of Web Test Cases","authors":"Maurizio Leotta, Andrea Stocco, F. Ricca, P. Tonella","doi":"10.1109/ICST.2015.7102611","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102611","url":null,"abstract":"The main reason for the fragility of web test cases is the inability of web element locators to work correctly when the web page DOM evolves. Web element locators are used in web test cases to identify all the GUI objects to operate upon and eventually to retrieve web page content that is compared against some oracle in order to decide whether the test case has passed or not. Hence, web element locators play an extremely important role in web testing, and when a web element locator gets broken developers have to spend substantial time and effort to repair it. While algorithms exist to produce robust web element locators to be used in web test scripts, no algorithm is perfect and different algorithms are exposed to different fragilities when the software evolves. Based on this observation, we propose a new type of locator, named multi-locator, which selects the best locator among a candidate set of locators produced by different algorithms. Such selection is based on a voting procedure that assigns different voting weights to different locator generation algorithms. Experimental results obtained on six web applications, for which a subsequent release was available, show that the multi-locator is more robust than the single locators (about -30% of broken locators w.r.t. the most robust kind of single locator) and that the execution overhead required by the multiple queries done with different locators is negligible (2-3% at most).","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117211931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing Legacy Embedded Code: Landing on a Software Engineering Desert Island","authors":"M. Oriol","doi":"10.1109/ICST.2015.7102634","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102634","url":null,"abstract":"Research on software engineering typically focuses on mainstream languages such as Java, .NET, and C. It is validated using projects easily executable and deployable on a desktop machine. Real, embedded legacy code, however, seldom consists of such clean code. This article presents such a case. We performed the analysis and testing of legacy code, which is a mix of C and DSP assembly. Such combinations of technologies cannot be analyzed by regular software engineering tools, creating a de facto software engineering desert island. Our solution relies on writing a parser for the DSP code and static analyzers, and on using integration test cases. To run the tests, we also automate deployment on the target hardware and run the tests from an integration server.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124378599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding the Test Automation Culture of App Developers","authors":"Pavneet Singh Kochhar, Ferdian Thung, Nachiappan Nagappan, Thomas Zimmermann, D. Lo","doi":"10.1109/ICST.2015.7102609","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102609","url":null,"abstract":"Smartphone applications (apps) have gained popularity recently. Millions of apps are available on different app stores, giving users a plethora of options to choose from; however, this also raises the concern of whether these apps are adequately tested before they are released for public use. In this study, we want to understand the test automation culture prevalent among app developers. Specifically, we want to examine the current state of testing of apps, the tools that are commonly used by app developers, and the problems faced by them. To get an insight into the test automation culture, we conduct two different studies. In the first study, we analyse over 600 Android apps collected from F-Droid, one of the largest repositories containing information about open-source Android apps. We check for the presence of test cases and calculate code coverage to measure the adequacy of testing in these apps. We also survey developers who have hosted their applications on GitHub to understand the testing practices followed by them. We ask developers about the tools that they use and the 'pain points' that they face while testing Android apps. For the second study, based on the responses from Android developers, we improve our survey questions and resend them to Windows app developers within Microsoft. We conclude that many Android apps are poorly tested - only about 14% of the apps contain test cases, and only about 9% of the apps that have executable test cases have coverage above 40%. Also, we find that Android app developers use automated testing tools such as JUnit, Monkeyrunner, Robotium, and Robolectric; however, they often prefer to test their apps manually, whereas Windows app developers prefer to use in-house tools such as Visual Studio and Microsoft Test Manager. Both Android and Windows app developers face many challenges, such as time constraints, compatibility issues, lack of exposure, and cumbersome tools. We give suggestions to improve the test automation culture in the growing app community.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129656362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fluently Reading, Writing and Speaking Hexadecimal with Gepetto's Help","authors":"Daniel Werner","doi":"10.1109/ICST.2015.7102621","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102621","url":null,"abstract":"Many engineers are exposed to binary data. These may be files or data exchanged over network links. When involved in the verification and validation of systems that deal with specific protocols or binary data storage, it is often tedious to analyse the hexadecimal dumps in order to find specific parameters of interest. Despite detailed protocol specifications, it takes a lot of manual effort to inspect byte after byte. This is not only laborious but also very error-prone, especially when messages are very complex, containing mixtures of big- and little-endianness, timestamps, ASCII, Unicode, base-64 images, calibrated data, and more. Furthermore, with large amounts of data, it is not straightforward to extract all the parameters of interest for offline data correlation or analysis. Last but not least, there is today no generic test tool that can autonomously interpret and respond to any protocol running over the Transmission Control Protocol (TCP/IP). Indeed, many different file formats and protocols exist, but writing a new tool for each of them, with the format definitions hard-coded, is not very efficient. A lot of time and money is wasted as people keep reinventing the wheel. Gepetto (the GEneric Processing Editing and Testing TOol) aims to provide a solution to this problem.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134509537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model-Based Continuous Integration Testing of Responsiveness of Web Applications","authors":"G. Brajnik, A. Baruzzo, S. Fabbro","doi":"10.1109/ICST.2015.7102626","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102626","url":null,"abstract":"The problem we tackle deals with testing responsiveness of Web applications across different user platforms, which are combinations of browsers, operating systems, devices and device orientation. Because of the variety of such environmental conditions, the behavior of the Web application (the application under test, AUT) can differ widely. Failure modes such as buttons not being visible, feedback boxes located in the wrong position, and user interface (UI) components not behaving properly are very common and may affect only some of the platforms. Even when practitioners adopt services such as Browserstack that provide virtualization of user platforms, identification of such defects is very demanding, and as a consequence it is seldom performed in a continuous integration fashion. Unfortunately, layout requirements of UIs tend to change very frequently, and the result is that changes are often not tested well enough. What is needed is a software test automation solution. We tackled this problem by adopting a model-based approach, centered on models that represent the data manipulated by the UI and its behavior. A compiler reads these models, integrates model annotations providing details about UI implementation and about test oracles, and produces Java source code of a test harness. The test harness hides details about the Document Object Model (DOM), about the particular test driver that is used to drive the AUT, and details about the oracles that can be used to check if the AUT satisfies certain properties, such as those dealing with its layout. The test engineer can write high level code for JUnit/TestNG test cases, which can then be run on all the desired platforms, using Selenium as a driver.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130977168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}