An Empirical Comparison of Two Different Strategies to Automated Fault Detection: Machine Learning Versus Dynamic Analysis

Rafig Almaghairbe, M. Roper
2019 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), October 2019
DOI: 10.1109/ISSREW.2019.00099
Citations: 1

Abstract

Software testing is an established method to ensure software quality and reliability, but it is an expensive process. In recent years, the automation of test case generation has received significant attention as a way to reduce costs. However, the oracle problem (a mechanism for determining the (in)correctness of an executed test case) is still a major problem which has been largely ignored. Recent work has shown that building a test oracle on the principles of anomaly detection (mainly semi-supervised/unsupervised learning models over dynamic execution data consisting of an amalgamation of input/output pairs and execution traces) achieves a reasonable level of success in automatically detecting passing and failing executions [1], [2]. In this paper, we present a comparison study between our machine-learning based approaches and an existing technique from the specification mining domain (the data invariant detector Daikon [3]). The two approaches are evaluated on a range of mid-sized systems and compared in terms of their fault detection ability. The results show that in most cases semi-supervised learning techniques perform far better as an automated test classifier than Daikon. However, there is one system on which our strategy struggles and Daikon performed far better. Furthermore, unsupervised learning techniques performed on a par with Daikon in several cases.
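To make the anomaly-detection idea concrete, the following is a minimal illustrative sketch (not the paper's actual method or feature set): each test execution is reduced to a numeric feature vector (e.g. derived from outputs and trace counts), and executions that lie unusually far from the centroid of the observed data are flagged as suspected failures. The feature values, the distance-to-centroid detector, and the `threshold_factor` cutoff are all assumptions chosen for brevity; the cited work uses richer semi-supervised/unsupervised models.

```python
import math

def centroid(vectors):
    # Component-wise mean of all execution feature vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_executions(executions, threshold_factor=2.0):
    """Flag executions whose distance from the centroid exceeds
    threshold_factor * mean distance as anomalous (suspected failures)."""
    c = centroid(executions)
    dists = [distance(v, c) for v in executions]
    cutoff = threshold_factor * (sum(dists) / len(dists))
    return ["fail" if d > cutoff else "pass" for d in dists]

# Hypothetical data: four similar executions plus one outlier trace vector.
execs = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, 2.2], [9.0, 9.0]]
print(classify_executions(execs))  # the outlier is classified as "fail"
```

The design choice here mirrors the unsupervised setting described in the abstract: no labelled failing executions are needed, because failures are assumed to be rare and to look different from the bulk of the (mostly passing) executions.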