A reinforcement learning-based approach to testing GUI of mobile applications

Chuanqi Tao, Fengyu Wang, Yuemeng Gao, Hongjing Guo, Jerry Gao
{"title":"基于强化学习的移动应用程序图形用户界面测试方法","authors":"Chuanqi Tao, Fengyu Wang, Yuemeng Gao, Hongjing Guo, Jerry Gao","doi":"10.1007/s11280-024-01252-9","DOIUrl":null,"url":null,"abstract":"<p>With the popularity of mobile devices, the software market of mobile applications has been booming in recent years. Android applications occupy a vast market share. However, the applications inevitably contain defects. Defects may affect the user experience and even cause severe economic losses. This paper proposes ATAC and ATPPO, which apply reinforcement learning to Android GUI testing to mitigate the state explosion problem. The article designs a new reward function and a new state representation. It also constructs two GUI testing models (ATAC and ATPPO) based on A2C and PPO algorithms to save memory space and accelerate training speed. Empirical studies on twenty open-source applications from GitHub demonstrate that: (1) ATAC performs best in 16 of 20 apps in code coverage and defects more exceptions; (2) ATPPO can get higher code coverage in 15 of 20 apps and defects more exceptions; (3) Compared with state-of-art tools Monkey and ARES, ATAC, and ATPPO shows higher code coverage and detects more errors. ATAC and ATPPO can not only cover more code coverage but also can effectively detect more exceptions. This paper also introduces Finite-State Machine into the reinforcement learning framework to avoid falling into the local optimal state, which provides high-level guidance for further improving the test efficiency.</p>","PeriodicalId":501180,"journal":{"name":"World Wide Web","volume":"18 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A reinforcement learning-based approach to testing GUI of moblie applications\",\"authors\":\"Chuanqi Tao, Fengyu Wang, Yuemeng Gao, Hongjing Guo, Jerry Gao\",\"doi\":\"10.1007/s11280-024-01252-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>With the popularity of mobile devices, the software market of mobile applications has been booming in recent years. Android applications occupy a vast market share. However, the applications inevitably contain defects. Defects may affect the user experience and even cause severe economic losses. This paper proposes ATAC and ATPPO, which apply reinforcement learning to Android GUI testing to mitigate the state explosion problem. The article designs a new reward function and a new state representation. It also constructs two GUI testing models (ATAC and ATPPO) based on A2C and PPO algorithms to save memory space and accelerate training speed. Empirical studies on twenty open-source applications from GitHub demonstrate that: (1) ATAC performs best in 16 of 20 apps in code coverage and defects more exceptions; (2) ATPPO can get higher code coverage in 15 of 20 apps and defects more exceptions; (3) Compared with state-of-art tools Monkey and ARES, ATAC, and ATPPO shows higher code coverage and detects more errors. ATAC and ATPPO can not only cover more code coverage but also can effectively detect more exceptions. 
This paper also introduces Finite-State Machine into the reinforcement learning framework to avoid falling into the local optimal state, which provides high-level guidance for further improving the test efficiency.</p>\",\"PeriodicalId\":501180,\"journal\":{\"name\":\"World Wide Web\",\"volume\":\"18 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-02-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"World Wide Web\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s11280-024-01252-9\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Wide Web","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11280-024-01252-9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

With the popularity of mobile devices, the market for mobile application software has boomed in recent years, and Android applications hold a vast share of it. These applications inevitably contain defects, which may degrade the user experience and even cause severe economic losses. This paper proposes ATAC and ATPPO, which apply reinforcement learning to Android GUI testing to mitigate the state explosion problem. It designs a new reward function and a new state representation, and builds two GUI testing models, ATAC and ATPPO, on the A2C and PPO algorithms to save memory and speed up training. Empirical studies on twenty open-source applications from GitHub show that: (1) ATAC achieves the best code coverage on 16 of the 20 apps and detects more exceptions; (2) ATPPO achieves higher code coverage on 15 of the 20 apps and also detects more exceptions; (3) compared with the state-of-the-art tools Monkey and ARES, ATAC and ATPPO reach higher code coverage and detect more errors. ATAC and ATPPO thus not only cover more code but also effectively detect more exceptions. The paper also introduces a finite-state machine into the reinforcement learning framework to avoid getting stuck in locally optimal states, providing high-level guidance that further improves test efficiency.
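The abstract names the ingredients of the approach (a coverage- and exception-oriented reward, an abstract GUI state, A2C/PPO policies, and finite-state-machine guidance) without giving their exact form, so the sketch below is only a toy illustration of how such pieces might fit together. The screen graph APP, the reward weights, the tabular epsilon-greedy learner standing in for the A2C/PPO networks, and the frontier_screen heuristic are all hypothetical, introduced here for illustration rather than taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical app model (not from the paper): screen -> {action: (next_screen, crashes?)}
APP = {
    "home":     {"open_menu": ("menu", False), "tap_banner": ("home", False)},
    "menu":     {"settings": ("settings", False), "back": ("home", False)},
    "settings": {"toggle_dark": ("settings", False), "reset": ("crash", True),
                 "back": ("menu", False)},
}

# Assumed reward weights: the abstract only says the reward targets coverage and exceptions.
CRASH_BONUS, NEW_SCREEN_BONUS, REVISIT_PENALTY = 10.0, 1.0, -0.1


def reward(next_screen, crashed, seen):
    """Reward-shaping sketch: favour crashes and unseen screens, discourage loops."""
    if crashed:
        return CRASH_BONUS
    return NEW_SCREEN_BONUS if next_screen not in seen else REVISIT_PENALTY


def frontier_screen(fsm, seen):
    """FSM-style guidance (assumption): return a visited screen that still has
    untried actions, so exploration can escape a locally optimal loop."""
    for s in seen:
        if s in APP and len(fsm[s]) < len(APP[s]):
            return s
    return None


def run_episode(q, fsm, seen, eps=0.2, max_steps=30):
    """One testing episode with a tabular epsilon-greedy agent standing in for
    the A2C/PPO policies. Returns True if an exception (crash) was triggered."""
    screen = "home"
    for _ in range(max_steps):
        actions = list(APP[screen])
        if random.random() < eps:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(screen, a)])
        nxt, crashed = APP[screen][action]
        r = reward(nxt, crashed, seen)
        best_next = max((q[(nxt, a)] for a in APP.get(nxt, {})), default=0.0)
        q[(screen, action)] += 0.5 * (r + 0.9 * best_next - q[(screen, action)])
        fsm[screen].add((action, nxt))   # record the GUI transition graph
        seen.add(nxt)
        if crashed:
            return True                  # exception detected; a real harness would restart the app
        # If the next screen is already fully explored, let the FSM redirect us
        # to a screen with untried actions instead of cycling in place.
        if len(fsm[nxt]) == len(APP[nxt]):
            nxt = frontier_screen(fsm, seen) or nxt
        screen = nxt
    return False


if __name__ == "__main__":
    q, fsm, seen = defaultdict(float), defaultdict(set), {"home"}
    crashes = sum(run_episode(q, fsm, seen) for _ in range(20))
    print(f"screens covered: {len(seen & APP.keys())}/{len(APP)}, crashes detected: {crashes}")
```

In a real setting the environment would drive a device or emulator (for example through ADB or UIAutomator events rather than a dictionary lookup), the reward would be computed from instrumented coverage and reported exceptions, and the tabular Q-values would be replaced by the actor-critic networks that A2C and PPO train.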

