A Lightweight Approach of Human-Like Playtest for Android Apps

Yan Zhao, Enyi Tang, Haipeng Cai, Xi Guo, Xiaoyin Wang, Na Meng
{"title":"Android应用类人游戏测试的轻量级方法","authors":"Yan Zhao, Enyi Tang, Haipeng Cai, Xi Guo, Xiaoyin Wang, Na Meng","doi":"10.1109/saner53432.2022.00047","DOIUrl":null,"url":null,"abstract":"A play test is the process in which testers play video games for software quality assurance. Manual testing is expensive and time-consuming, especially when there are many mobile games to test and every game version requires extensive testing. Current testing frameworks (e.g., Android Monkey) are limited as they adopt no domain knowledge to play games. Learning-based tools (e.g., Wuji) require tremendous manual effort and ML expertise of developers. This paper presents LIT-a lightweight approach to generalize play test tactics from manual testing, and to adopt the tactics for automatic testing. Lit has two phases: tactic generalization and tactic concretization. In Phase I, when a human tester plays an Android game $G$ for a while (e.g., eight minutes), Lit records the tester's inputs and related scenes. Based on the collected data, Lit infers a set of context-aware, abstract play test tactics that describe under what circumstances, what actions can be taken. In Phase II, LIttests $G$ based on the generalized tactics. Namely, given a randomly generated game scene, Lit tentatively matches that scene with the abstract context of any inferred tactic; if the match succeeds, Lit customizes the tactic to generate an action for playtest. Our evaluation with nine games shows Lit to outperform two state-of-the-art tools and a reinforcement learning (RL)-based tool, by covering more code and triggering more errors. Lit complements existing tools and helps developers test various casual games (e.g., match3, shooting, and puzzles).","PeriodicalId":437520,"journal":{"name":"2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Lightweight Approach of Human-Like Playtest for Android Apps\",\"authors\":\"Yan Zhao, Enyi Tang, Haipeng Cai, Xi Guo, Xiaoyin Wang, Na Meng\",\"doi\":\"10.1109/saner53432.2022.00047\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A play test is the process in which testers play video games for software quality assurance. Manual testing is expensive and time-consuming, especially when there are many mobile games to test and every game version requires extensive testing. Current testing frameworks (e.g., Android Monkey) are limited as they adopt no domain knowledge to play games. Learning-based tools (e.g., Wuji) require tremendous manual effort and ML expertise of developers. This paper presents LIT-a lightweight approach to generalize play test tactics from manual testing, and to adopt the tactics for automatic testing. Lit has two phases: tactic generalization and tactic concretization. In Phase I, when a human tester plays an Android game $G$ for a while (e.g., eight minutes), Lit records the tester's inputs and related scenes. Based on the collected data, Lit infers a set of context-aware, abstract play test tactics that describe under what circumstances, what actions can be taken. In Phase II, LIttests $G$ based on the generalized tactics. 
Namely, given a randomly generated game scene, Lit tentatively matches that scene with the abstract context of any inferred tactic; if the match succeeds, Lit customizes the tactic to generate an action for playtest. Our evaluation with nine games shows Lit to outperform two state-of-the-art tools and a reinforcement learning (RL)-based tool, by covering more code and triggering more errors. Lit complements existing tools and helps developers test various casual games (e.g., match3, shooting, and puzzles).\",\"PeriodicalId\":437520,\"journal\":{\"name\":\"2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/saner53432.2022.00047\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/saner53432.2022.00047","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

A playtest is the process in which testers play video games for software quality assurance. Manual testing is expensive and time-consuming, especially when there are many mobile games to test and every game version requires extensive testing. Current testing frameworks (e.g., Android Monkey) are limited because they adopt no domain knowledge to play games. Learning-based tools (e.g., Wuji) require tremendous manual effort and ML expertise from developers. This paper presents LIT, a lightweight approach that generalizes playtest tactics from manual testing and adopts those tactics for automatic testing. LIT has two phases: tactic generalization and tactic concretization. In Phase I, while a human tester plays an Android game G for a while (e.g., eight minutes), LIT records the tester's inputs and the related scenes. Based on the collected data, LIT infers a set of context-aware, abstract playtest tactics that describe under what circumstances which actions can be taken. In Phase II, LIT tests G based on the generalized tactics. Namely, given a randomly generated game scene, LIT tentatively matches that scene against the abstract context of each inferred tactic; if a match succeeds, LIT customizes the tactic to generate an action for the playtest. Our evaluation on nine games shows that LIT outperforms two state-of-the-art tools and a reinforcement learning (RL)-based tool by covering more code and triggering more errors. LIT complements existing tools and helps developers test various casual games (e.g., match-3, shooting, and puzzle games).
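To make the two phases more concrete, the sketch below is a minimal, hypothetical illustration of Phase II (tactic concretization), not the authors' implementation: the Scene, Tactic, matches, and concretize names and the tap-the-enemy example are assumptions made purely for illustration. A tactic pairs an abstract context (which objects must appear in a scene) with an action template that is filled in from the concrete scene.

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class Scene:
    # Labeled on-screen objects with (x, y) positions, e.g. {"enemy": (120, 340)}.
    objects: Dict[str, Tuple[int, int]]

@dataclass
class Tactic:
    # Abstract context: object labels that must be present in the scene.
    required_objects: List[str]
    # Action template: turns a concrete matching scene into an input event.
    action: Callable[[Scene], Dict]

def matches(tactic: Tactic, scene: Scene) -> bool:
    # Phase II, step 1: does the scene satisfy the tactic's abstract context?
    return all(label in scene.objects for label in tactic.required_objects)

def concretize(tactic: Tactic, scene: Scene) -> Optional[Dict]:
    # Phase II, step 2: customize the matched tactic into a concrete action.
    return tactic.action(scene) if matches(tactic, scene) else None

# A tactic that a recorded human demonstration might generalize to:
# "whenever an enemy is visible, tap on it".
tap_enemy = Tactic(
    required_objects=["enemy"],
    action=lambda s: {"type": "tap",
                      "x": s.objects["enemy"][0],
                      "y": s.objects["enemy"][1]},
)

scene = Scene(objects={"enemy": (120, 340), "player": (80, 600)})
print(concretize(tap_enemy, scene))  # -> {'type': 'tap', 'x': 120, 'y': 340}

In this reading, Phase I would correspond to inferring objects like tap_enemy from the tester's recorded inputs and scenes, while Phase II repeatedly matches and concretizes such tactics against freshly generated scenes during automatic testing.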