On the Effectiveness of Random Testing for Android: Or How I Learned to Stop Worrying and Love the Monkey

Priyam Patel, Gokul Srinivasan, Sydur Rahaman, Iulian Neamtiu
{"title":"关于Android随机测试的有效性:或者我如何学会停止担忧并爱上猴子","authors":"Priyam Patel, Gokul Srinivasan, Sydur Rahaman, Iulian Neamtiu","doi":"10.1145/3194733.3194742","DOIUrl":null,"url":null,"abstract":"Random testing of Android apps is attractive due to ease-of-use and scalability, but its effectiveness could be questioned. Prior studies have shown that Monkey – a simple approach and tool for random testing of Android apps – is surprisingly effective, \"beating\" much more sophisticated tools by achieving higher coverage. We study how Monkey's parameters affect code coverage (at class, method, block, and line levels) and set out to answer several research questions centered around improving the effectiveness of Monkey-based random testing in Android, and how it compares with manual exploration. First, we show that random stress testing via Monkey is extremely efficient (85 seconds on average) and effective at crashing apps, including 15 widely-used apps that have millions (or even billions) of installs. Second, we vary Monkey's event distribution to change app behavior and measured the resulting coverage. We found that, except for isolated cases, altering Monkey's default event distribution is unlikely to lead to higher coverage. Third, we manually explore 62 apps and compare the resulting coverages; we found that coverage achieved via manual exploration is just 2-3% higher than that achieved via Monkey exploration. Finally, our analysis shows that coarse-grained coverage is highly indicative of fine-grained coverage, hence coarse-grained coverage (which imposes low collection overhead) hits a performance vs accuracy sweet spot.","PeriodicalId":423703,"journal":{"name":"2018 IEEE/ACM 13th International Workshop on Automation of Software Test (AST)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"38","resultStr":"{\"title\":\"On the Effectiveness of Random Testing for Android: Or How I Learned to Stop Worrying and Love the Monkey\",\"authors\":\"Priyam Patel, Gokul Srinivasan, Sydur Rahaman, Iulian Neamtiu\",\"doi\":\"10.1145/3194733.3194742\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Random testing of Android apps is attractive due to ease-of-use and scalability, but its effectiveness could be questioned. Prior studies have shown that Monkey – a simple approach and tool for random testing of Android apps – is surprisingly effective, \\\"beating\\\" much more sophisticated tools by achieving higher coverage. We study how Monkey's parameters affect code coverage (at class, method, block, and line levels) and set out to answer several research questions centered around improving the effectiveness of Monkey-based random testing in Android, and how it compares with manual exploration. First, we show that random stress testing via Monkey is extremely efficient (85 seconds on average) and effective at crashing apps, including 15 widely-used apps that have millions (or even billions) of installs. Second, we vary Monkey's event distribution to change app behavior and measured the resulting coverage. We found that, except for isolated cases, altering Monkey's default event distribution is unlikely to lead to higher coverage. Third, we manually explore 62 apps and compare the resulting coverages; we found that coverage achieved via manual exploration is just 2-3% higher than that achieved via Monkey exploration. 
Finally, our analysis shows that coarse-grained coverage is highly indicative of fine-grained coverage, hence coarse-grained coverage (which imposes low collection overhead) hits a performance vs accuracy sweet spot.\",\"PeriodicalId\":423703,\"journal\":{\"name\":\"2018 IEEE/ACM 13th International Workshop on Automation of Software Test (AST)\",\"volume\":\"127 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-05-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"38\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE/ACM 13th International Workshop on Automation of Software Test (AST)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3194733.3194742\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/ACM 13th International Workshop on Automation of Software Test (AST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3194733.3194742","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 38

Abstract

Random testing of Android apps is attractive due to ease-of-use and scalability, but its effectiveness could be questioned. Prior studies have shown that Monkey – a simple approach and tool for random testing of Android apps – is surprisingly effective, "beating" much more sophisticated tools by achieving higher coverage. We study how Monkey's parameters affect code coverage (at class, method, block, and line levels) and set out to answer several research questions centered around improving the effectiveness of Monkey-based random testing in Android, and how it compares with manual exploration. First, we show that random stress testing via Monkey is extremely efficient (85 seconds on average) and effective at crashing apps, including 15 widely-used apps that have millions (or even billions) of installs. Second, we vary Monkey's event distribution to change app behavior and measured the resulting coverage. We found that, except for isolated cases, altering Monkey's default event distribution is unlikely to lead to higher coverage. Third, we manually explore 62 apps and compare the resulting coverages; we found that coverage achieved via manual exploration is just 2-3% higher than that achieved via Monkey exploration. Finally, our analysis shows that coarse-grained coverage is highly indicative of fine-grained coverage, hence coarse-grained coverage (which imposes low collection overhead) hits a performance vs accuracy sweet spot.
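The event-distribution experiments described in the abstract rely on Monkey's standard --pct-* options (--pct-touch, --pct-motion, --pct-appswitch, and so on), which are passed to `adb shell monkey` on the command line. The sketch below is not the authors' harness; the package name, event count, throttle, and percentages are illustrative assumptions, intended only to show how a run with Monkey's default distribution and a run with an altered distribution might be launched and compared:

```python
import subprocess

def run_monkey(package, events=5000, seed=42, pct=None):
    """Run `adb shell monkey` against one app, optionally overriding the
    default event distribution via Monkey's --pct-* options."""
    cmd = [
        "adb", "shell", "monkey",
        "-p", package,            # restrict generated events to this package
        "-s", str(seed),          # fixed seed -> reproducible event stream
        "-v",                     # verbose: Monkey logs events and crashes
        "--throttle", "100",      # 100 ms pause between injected events
        "--ignore-crashes",       # keep injecting so whole runs stay comparable
    ]
    for kind, percent in (pct or {}).items():
        cmd += [f"--pct-{kind}", str(percent)]   # e.g. --pct-touch 60
    cmd.append(str(events))       # total number of events to inject
    return subprocess.run(cmd, capture_output=True, text=True)

if __name__ == "__main__":
    # Compare Monkey's default distribution against a touch-heavy one.
    # Package name and percentages are illustrative, not values from the paper.
    for dist in (None, {"touch": 60, "motion": 20, "appswitch": 10, "anyevent": 10}):
        result = run_monkey("com.example.app", pct=dist)
        crashed = "// CRASH" in result.stdout   # Monkey prints "// CRASH:" lines
        print(f"distribution={dist or 'default'} crashed={crashed}")
```

Fixing the seed (-s) keeps each event stream reproducible, which is what lets crash behavior or coverage from different distributions be compared fairly; coverage itself would still have to be collected separately from an instrumented build, which this sketch does not attempt.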