On Using Decision Tree Coverage Criteria for Testing Machine Learning Models

Sebastião Santos, B. Silveira, Vinicius H. S. Durelli, R. Durelli, S. Souza, M. Delamaro
{"title":"On Using Decision Tree Coverage Criteria forTesting Machine Learning Models","authors":"Sebastião Santos, B. Silveira, Vinicius H. S. Durelli, R. Durelli, S. Souza, M. Delamaro","doi":"10.1145/3482909.3482911","DOIUrl":null,"url":null,"abstract":"Over the past decade, there has been a growing interest in applying machine learning (ML) to address a myriad of tasks. Owing to this interest, the adoption of ML-based systems has gone mainstream. However, this widespread adoption of ML-based systems poses new challenges for software testers that must improve the quality and reliability of these ML-based solutions. To cope with the challenges of testing ML-based systems, we propose novel test adequacy criteria based on decision tree models. Differently from the traditional approach to testing ML models, which relies on manual collection and labelling of data, our criteria leverage the internal structure of decision tree models to guide the selection of test inputs. Thus, we introduce decision tree coverage (DTC) and boundary value analysis (BVA) as approaches to systematically guide the creation of effective test data that exercises key structural elements of a given decision tree model. To evaluate these criteria, we carried out an experiment using 12 datasets. We measured the effectiveness of test inputs in terms of the difference in model’s behavior between the test input and the training data. The experiment results indicate that our testing criteria can be used to guide the generation of effective test data.","PeriodicalId":355243,"journal":{"name":"Proceedings of the 6th Brazilian Symposium on Systematic and Automated Software Testing","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th Brazilian Symposium on Systematic and Automated Software Testing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3482909.3482911","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Over the past decade, there has been growing interest in applying machine learning (ML) to a myriad of tasks. Owing to this interest, the adoption of ML-based systems has gone mainstream. However, this widespread adoption poses new challenges for software testers, who must improve the quality and reliability of these ML-based solutions. To cope with the challenges of testing ML-based systems, we propose novel test adequacy criteria based on decision tree models. Unlike the traditional approach to testing ML models, which relies on manual collection and labelling of data, our criteria leverage the internal structure of decision tree models to guide the selection of test inputs. Thus, we introduce decision tree coverage (DTC) and boundary value analysis (BVA) as approaches to systematically guide the creation of effective test data that exercises key structural elements of a given decision tree model. To evaluate these criteria, we carried out an experiment using 12 datasets. We measured the effectiveness of test inputs in terms of the difference in the model's behavior between the test input and the training data. The experimental results indicate that our testing criteria can be used to guide the generation of effective test data.
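To make the two criteria concrete, the sketch below shows one plausible way to measure structural coverage of a decision tree and to derive boundary-value inputs from its split thresholds. This is not the authors' implementation: it uses scikit-learn, approximates DTC as leaf (decision-path) coverage, and the names `leaf_coverage` and `boundary_inputs` are illustrative only.

```python
# A minimal sketch, assuming scikit-learn decision trees: leaf coverage as a
# proxy for decision tree coverage (DTC) and threshold-based input generation
# in the spirit of boundary value analysis (BVA). Not the paper's tool.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def leaf_coverage(model, X_test):
    """Fraction of the tree's leaves reached by at least one test input."""
    reached = set(model.apply(X_test))           # leaf id of each test input
    return len(reached) / model.get_n_leaves()

def boundary_inputs(model, baseline, eps=1e-3):
    """For every internal split, perturb a baseline input so the split feature
    lands just below and just above the threshold (boundary values).
    Note: perturbing a single feature does not guarantee the path to that
    node is taken; this is only a simple illustration of the idea."""
    tree = model.tree_
    inputs = []
    for node in range(tree.node_count):
        if tree.children_left[node] == -1:       # skip leaf nodes
            continue
        f, t = tree.feature[node], tree.threshold[node]
        for delta in (-eps, +eps):
            x = baseline.copy()
            x[f] = t + delta
            inputs.append(x)
    return np.array(inputs)

X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

X_test = X[::10]                                 # stand-in for a test set
print(f"DTC (leaf coverage): {leaf_coverage(model, X_test):.2%}")

bva = boundary_inputs(model, baseline=X.mean(axis=0))
print(f"BVA inputs generated: {len(bva)}, "
      f"coverage they achieve: {leaf_coverage(model, bva):.2%}")
```

In this reading, a test suite satisfies the coverage criterion when every leaf (and hence every decision path) is exercised, while the boundary-value inputs probe the model near its split thresholds, where behavioral differences from the training data are most likely to surface.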