{"title":"Enabling Cost-Effective UI Automation Testing with Retrieval-Based LLMs: A Case Study in WeChat","authors":"Sidong Feng, Haochuan Lu, Jianqin Jiang, Ting Xiong, Likun Huang, Yinglin Liang, Xiaoqin Li, Yuetang Deng, Aldeida Aleti","doi":"arxiv-2409.07829","DOIUrl":null,"url":null,"abstract":"UI automation tests play a crucial role in ensuring the quality of mobile\napplications. Despite the growing popularity of machine learning techniques to\ngenerate these tests, they still face several challenges, such as the mismatch\nof UI elements. The recent advances in Large Language Models (LLMs) have\naddressed these issues by leveraging their semantic understanding capabilities.\nHowever, a significant gap remains in applying these models to industrial-level\napp testing, particularly in terms of cost optimization and knowledge\nlimitation. To address this, we introduce CAT to create cost-effective UI\nautomation tests for industry apps by combining machine learning and LLMs with\nbest practices. Given the task description, CAT employs Retrieval Augmented\nGeneration (RAG) to source examples of industrial app usage as the few-shot\nlearning context, assisting LLMs in generating the specific sequence of\nactions. CAT then employs machine learning techniques, with LLMs serving as a\ncomplementary optimizer, to map the target element on the UI screen. Our\nevaluations on the WeChat testing dataset demonstrate the CAT's performance and\ncost-effectiveness, achieving 90% UI automation with $0.34 cost, outperforming\nthe state-of-the-art. We have also integrated our approach into the real-world\nWeChat testing platform, demonstrating its usefulness in detecting 141 bugs and\nenhancing the developers' testing process.","PeriodicalId":501278,"journal":{"name":"arXiv - CS - Software Engineering","volume":"10 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07829","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
UI automation tests play a crucial role in ensuring the quality of mobile applications. Despite the growing popularity of machine learning techniques for generating these tests, they still face several challenges, such as mismatches of UI elements. Recent advances in Large Language Models (LLMs) have addressed these issues by leveraging their semantic understanding capabilities. However, a significant gap remains in applying these models to industrial-level app testing, particularly in terms of cost optimization and knowledge limitations. To address this, we introduce CAT, which creates cost-effective UI automation tests for industrial apps by combining machine learning and LLMs with best practices. Given a task description, CAT employs Retrieval Augmented Generation (RAG) to source examples of industrial app usage as the few-shot learning context, assisting the LLM in generating the specific sequence of actions. CAT then employs machine learning techniques, with the LLM serving as a complementary optimizer, to map each target element on the UI screen. Our evaluations on the WeChat testing dataset demonstrate CAT's performance and cost-effectiveness, achieving 90% UI automation at a cost of $0.34, outperforming the state-of-the-art. We have also integrated our approach into the real-world WeChat testing platform, demonstrating its usefulness in detecting 141 bugs and enhancing the developers' testing process.
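
The abstract's first step is retrieval-augmented prompting: similar recorded app usages are retrieved and placed in the prompt as few-shot examples before the LLM is asked for an action sequence. The following is a minimal sketch of that idea, not the paper's implementation; the names (UsageExample, retrieve_examples, build_prompt) are hypothetical, and a simple bag-of-words cosine stands in for whatever retrieval model CAT actually uses.

```python
# Illustrative sketch of RAG-style few-shot prompting for action-sequence
# generation. All identifiers and the similarity metric are assumptions made
# for illustration, not the paper's actual components.
import math
from collections import Counter
from dataclasses import dataclass


@dataclass
class UsageExample:
    task: str      # natural-language task description
    actions: str   # recorded action sequence for that task


def _cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two task descriptions."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve_examples(task: str, corpus: list[UsageExample], k: int = 3) -> list[UsageExample]:
    """Pick the k recorded usages most similar to the new task as few-shot context."""
    return sorted(corpus, key=lambda ex: _cosine(task, ex.task), reverse=True)[:k]


def build_prompt(task: str, examples: list[UsageExample]) -> str:
    """Assemble a few-shot prompt asking the LLM for an action sequence."""
    shots = "\n\n".join(f"Task: {ex.task}\nActions: {ex.actions}" for ex in examples)
    return f"{shots}\n\nTask: {task}\nActions:"
```

Keeping the retrieved examples short and task-specific is what keeps the prompt, and hence the per-test LLM cost, small.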
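
The second step maps each generated action's target onto a concrete element of the current UI screen, with a cheap matcher first and the LLM only as a complementary optimizer. Below is a minimal sketch of that fallback pattern under assumed names (UIElement, map_target, llm_pick, THRESHOLD); the string-similarity matcher is a stand-in for the machine learning techniques the paper uses.

```python
# Illustrative sketch of element mapping with an LLM fallback: a lightweight
# matcher scores candidate elements, and the LLM is consulted only when the
# best score is too low to trust. Names and threshold are hypothetical.
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import Callable


@dataclass
class UIElement:
    element_id: str
    text: str  # visible label or accessibility text


THRESHOLD = 0.6  # assumed confidence cut-off below which we fall back to the LLM


def _score(a: str, b: str) -> float:
    """Cheap string similarity between the target description and an element label."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def map_target(target: str, screen: list[UIElement],
               llm_pick: Callable[[str, list[UIElement]], UIElement]) -> UIElement:
    """Return the element best matching `target`, asking the LLM only when unsure."""
    best_el = max(screen, key=lambda el: _score(target, el.text))
    if _score(target, best_el.text) >= THRESHOLD:
        return best_el                  # cheap path: the matcher is confident enough
    return llm_pick(target, screen)     # costly path: let the LLM disambiguate
```

Routing only the ambiguous cases to the LLM is one plausible way the reported cost of roughly $0.34 per automated test could be kept low.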