Title: An empirical investigation into the capabilities of anomaly detection approaches for test smell detection
Authors: Valeria Pontillo, Luana Martins, Ivan Machado, Fabio Palomba, Filomena Ferrucci
Journal: Journal of Systems and Software, Volume 222, Article 112320 (JCR Q1, Computer Science, Software Engineering)
DOI: 10.1016/j.jss.2024.112320
Publication date: 2024-12-30
URL: https://www.sciencedirect.com/science/article/pii/S0164121224003649
Citations: 0
Abstract
Test smells are symptoms of sub-optimal design choices adopted when developing test cases. Previous research has demonstrated their harmful effects on test code maintainability and effectiveness, and hence on overall test code quality. As such, the quality of test cases affected by test smells is likely to deviate significantly from that of test cases not affected by any smell, and such cases might be classified as anomalies. In this paper, we challenge this observation by experimenting with three anomaly detection approaches, based on machine learning, cluster analysis, and statistics, to understand their effectiveness for the detection of four test smells, i.e., Eager Test, Mystery Guest, Resource Optimism, and Test Redundancy, on 66 open-source Java projects. In addition, we compare our results with state-of-the-art heuristic-based and machine learning-based baselines. Our ultimate goal is not to prove that anomaly detection methods are better than existing approaches, but to objectively assess their effectiveness in this domain. The key findings of the study show that the F-Measure of the anomaly detectors never exceeds 47%, a value obtained for Eager Test detection with the statistical approach, while Recall is generally higher for the statistical and clustering approaches. Nevertheless, the anomaly detection approaches achieve a higher Recall than the heuristic- and machine learning-based techniques for all test smells. The low F-Measure values we observed for anomaly detectors provide valuable insights into the current limitations of anomaly detection in this context. We conclude our study by elaborating on and discussing the reasons behind these negative results through qualitative investigations. Our analysis shows that the detection of test smells could depend on the approach exploited, suggesting the feasibility of developing a meta-approach.
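The statistical approach, which yielded the paper's best F-Measure (47%, on Eager Test), can be sketched as a simple univariate outlier rule over a test-code metric. The abstract does not specify the exact detector, so the z-score threshold and the metric used here (distinct production methods invoked per test, a common proxy for Eager Test) are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of a statistical anomaly detector for test smells.
# Assumption: a test whose metric value deviates strongly from the
# corpus mean (|z-score| above a chosen threshold) is flagged as an
# anomaly, i.e., a candidate smelly test.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Return indices of values whose |z-score| exceeds the threshold."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, so nothing can stand out
    return [i for i, v in enumerate(values)
            if abs((v - mu) / sigma) > threshold]

# Hypothetical metric: distinct production methods called per test method.
calls_per_test = [1, 2, 1, 1, 3, 2, 1, 12, 1, 2]
print(zscore_anomalies(calls_per_test))  # prints [7]: the 12-call test
```

In practice a detector like this flags anomalous tests, which are then matched against a test-smell oracle to compute Precision, Recall, and F-Measure, as done in the study's evaluation.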
Journal description:
The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
•Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
•Agile, model-driven, service-oriented, open source and global software development
•Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
•Human factors and management concerns of software development
•Data management and big data issues of software systems
•Metrics and evaluation, data mining of software development resources
•Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.