{"title":"A Survey of the Use of Test Report in Crowdsourced Testing","authors":"Song Huang, Hao Chen, Zhan-wei Hui, Yuchan Liu","doi":"10.1109/QRS51102.2020.00062","DOIUrl":null,"url":null,"abstract":"With the rise of crowdsourced software testing in recent years, the issuers of crowd test tasks can usually collect a large number of test reports after the end of the task. These reports have insufficient validity and completeness, and manual review often takes a lot of time and effort. The crowdsourced test task publisher hopes that after the crowdsourced platform collects the test report, it can analyze the validity and completeness of the report to determine the severity of the report and improve the efficiency of crowdsourced software testing. In the past ten years, researchers have used various technologies (such as natural language processing, information retrieval, machine learning, deep learning) to assist in analyzing reports to improve the efficiency of report review. We have summarized the relevant literature of report analysis in the past ten years, and then classified from report classification, duplicate report detection, report prioritization, report refactoring, and summarized the most important research work in each area. Finally, we propose research trends in these areas and analyze the challenges and opportunities facing crowdsourced test report analysis.","PeriodicalId":301814,"journal":{"name":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security (QRS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 20th International Conference on Software Quality, Reliability and Security (QRS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/QRS51102.2020.00062","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 1
Abstract
With the rise of crowdsourced software testing in recent years, the publishers of crowdsourced test tasks typically collect a large number of test reports once a task ends. Many of these reports lack validity and completeness, and reviewing them manually takes considerable time and effort. Task publishers therefore want the crowdsourcing platform, after collecting the reports, to analyze each report's validity and completeness, determine its severity, and thereby improve the efficiency of crowdsourced software testing. Over the past ten years, researchers have applied a variety of techniques (such as natural language processing, information retrieval, machine learning, and deep learning) to assist report analysis and improve the efficiency of report review. We survey the relevant literature on report analysis from the past ten years, organize it into four areas (report classification, duplicate report detection, report prioritization, and report refactoring), and summarize the most important research work in each area. Finally, we identify research trends in these areas and analyze the challenges and opportunities facing crowdsourced test report analysis.
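To make one of the surveyed techniques concrete, the sketch below shows a minimal information-retrieval approach to duplicate report detection: representing each report as a TF-IDF vector and flagging pairs whose cosine similarity exceeds a threshold. This is an illustrative example of the general technique, not the method of any specific paper in the survey; the sample reports and the threshold value are assumptions invented for the demonstration.

```python
# Minimal sketch: duplicate test-report detection via TF-IDF + cosine
# similarity, a common information-retrieval baseline in this literature.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical crowdsourced test reports (illustrative data only).
reports = [
    "App crashes when tapping the login button on the sign-in screen",
    "Crash occurs after pressing login on the sign-in page",
    "Profile photo fails to upload over a cellular connection",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reports)   # report-by-term matrix
similarity = cosine_similarity(tfidf)       # pairwise cosine similarities

# Flag report pairs above an assumed similarity threshold as duplicate
# candidates; in practice the threshold is tuned per dataset.
THRESHOLD = 0.3
for i in range(len(reports)):
    for j in range(i + 1, len(reports)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible duplicates: report {i} and report {j} "
                  f"(similarity {similarity[i, j]:.2f})")
```

In practice, surveyed approaches often extend this baseline with richer features (e.g., screenshots or structured fields from the reports) or replace the TF-IDF representation with learned embeddings, but the retrieve-and-compare structure remains the same.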