Machine vision-based testing action recognition method for robotic testing of mobile application

Impact Factor 1.9 · CAS Zone 4 (Computer Science) · JCR Q3 (COMPUTER SCIENCE, INFORMATION SYSTEMS)
Tao Zhang, Zhengqi Su, Jing Cheng, Feng Xue, Shengyu Liu
{"title":"Machine vision-based testing action recognition method for robotic testing of mobile application","authors":"Tao Zhang, Zhengqi Su, Jing Cheng, Feng Xue, Shengyu Liu","doi":"10.1177/15501329221115375","DOIUrl":null,"url":null,"abstract":"The explosive growth and rapid version iteration of various mobile applications have brought enormous workloads to mobile application testing. Robotic testing methods can efficiently handle repetitive testing tasks, which can compensate for the accuracy of manual testing and improve the efficiency of testing work. Vision-based robotic testing identifies the types of test actions by analyzing expert test videos and generates expert imitation test cases. The mobile application expert imitation testing method uses machine learning algorithms to analyze the behavior of experts imitating test videos, generates test cases with high reliability and reusability, and drives robots to execute test cases. However, the difficulty of estimating multi-dimensional gestures in 2D images leads to complex algorithm steps, including tracking, detection, and recognition of dynamic gestures. Hence, this article focuses on the analysis and recognition of test actions in mobile application robot testing. Combined with the improved YOLOv5 algorithm and the ResNet-152 algorithm, a visual modeling method of mobile application test action based on machine vision is proposed. The precise localization of the hand is accomplished by injecting dynamic anchors, attention mechanism, and the weighted boxes fusion in the YOLOv5 algorithm. The improved algorithm recognition accuracy increased from 82.6% to 94.8%. By introducing the pyramid context awareness mechanism into the ResNet-152 algorithm, the accuracy of test action classification is improved. The accuracy of the test action classification was improved from 72.57% to 76.84%. Experiments show that this method can reduce the probability of multiple detections and missed detection of test actions, and improve the accuracy of test action recognition.","PeriodicalId":50327,"journal":{"name":"International Journal of Distributed Sensor Networks","volume":" ","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Distributed Sensor Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1177/15501329221115375","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 4

Abstract

The explosive growth and rapid version iteration of mobile applications have created enormous workloads for mobile application testing. Robotic testing can efficiently handle repetitive testing tasks, compensating for the limited accuracy of manual testing and improving testing efficiency. Vision-based robotic testing identifies the types of test actions by analyzing expert test videos and generates expert-imitation test cases. The expert-imitation testing method for mobile applications uses machine learning algorithms to analyze expert behavior in test videos, generates highly reliable and reusable test cases, and drives a robot to execute them. However, estimating multi-dimensional gestures from 2D images is difficult and requires complex algorithmic steps, including tracking, detection, and recognition of dynamic gestures. This article therefore focuses on the analysis and recognition of test actions in robotic testing of mobile applications. Combining an improved YOLOv5 algorithm with ResNet-152, a machine vision-based modeling method for mobile application test actions is proposed. Precise hand localization is achieved by introducing dynamic anchors, an attention mechanism, and weighted boxes fusion into YOLOv5, raising detection accuracy from 82.6% to 94.8%. Introducing a pyramid context-awareness mechanism into ResNet-152 raises test action classification accuracy from 72.57% to 76.84%. Experiments show that the method reduces duplicate and missed detections of test actions and improves the accuracy of test action recognition.
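
As a rough, illustrative sketch of the two-stage pipeline the abstract describes (hand detection followed by test-action classification), the code below pairs a stock YOLOv5 detector loaded from torch.hub with a torchvision ResNet-152 classifier whose final layer is resized to a set of action classes. The action labels, the input filename, the recognize_action helper, and the use of unmodified off-the-shelf weights (rather than the paper's improved YOLOv5 with dynamic anchors, attention, and weighted boxes fusion, or its pyramid context-aware ResNet-152) are assumptions for illustration only.

# Illustrative two-stage sketch: a stock YOLOv5 model localizes the most
# confident region in a video frame, then ResNet-152 classifies the crop
# into a test-action type. ACTION_CLASSES, the input image name, and the
# unmodified pretrained weights are hypothetical stand-ins for the paper's
# improved models.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

ACTION_CLASSES = ["tap", "long_press", "swipe", "drag"]  # hypothetical labels

# Stage 1: detection with a stock YOLOv5 model from torch.hub.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Stage 2: classification with ResNet-152; the final fully connected layer
# is resized to the action classes and would need fine-tuning in practice.
classifier = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
classifier.fc = nn.Linear(classifier.fc.in_features, len(ACTION_CLASSES))
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def recognize_action(frame: Image.Image) -> str:
    """Pick the most confident detection in a frame and classify the crop."""
    detections = detector(frame).xyxy[0]  # tensor rows: x1, y1, x2, y2, conf, cls
    if detections.shape[0] == 0:
        return "no_action"
    x1, y1, x2, y2, *_ = detections[detections[:, 4].argmax()].tolist()
    crop = frame.crop((int(x1), int(y1), int(x2), int(y2)))
    with torch.no_grad():
        logits = classifier(preprocess(crop).unsqueeze(0))
    return ACTION_CLASSES[int(logits.argmax(dim=1))]

if __name__ == "__main__":
    print(recognize_action(Image.open("expert_test_frame.jpg").convert("RGB")))

In the paper's setting, the stock detector would be replaced by the improved YOLOv5 (dynamic anchors, attention mechanism, weighted boxes fusion) and the classifier fine-tuned with the pyramid context-awareness mechanism; the per-frame detect-then-classify structure stays the same.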
Journal metrics
CiteScore: 6.50
Self-citation rate: 4.30%
Articles published per year: 94
Average review time: 3.6 months
Journal description: International Journal of Distributed Sensor Networks (IJDSN) is a JCR-ranked, peer-reviewed, open access journal that focuses on applied research and applications of sensor networks. The goal of this journal is to provide a forum for the publication of important research contributions in developing high-performance computing solutions to problems arising from the complexities of these sensor network systems. Articles highlight advances in uses of sensor network systems for solving computational tasks in manufacturing, engineering, and environmental systems.