Automated testing of prevalent 3D user interactions in virtual reality applications
Ruizhen Gu, José Miguel Rojas, Donghwan Shin
Automated Software Engineering, vol. 33, no. 3 (2026). DOI: 10.1007/s10515-026-00620-1
Published: 2026-05-04 · Journal Article · JCR Q3 (Computer Science, Software Engineering)
Article: https://link.springer.com/article/10.1007/s10515-026-00620-1
PDF: https://link.springer.com/content/pdf/10.1007/s10515-026-00620-1.pdf
Citations: 0
Abstract
Virtual Reality (VR) technologies offer immersive user experiences across various domains, but present unique testing challenges compared to traditional software. Existing VR testing approaches enable scene navigation and interaction activation, but lack the ability to automatically synthesise realistic 3D user inputs (e.g., grab and trigger actions via hand-held controllers). Automated testing that generates and executes such input remains an unresolved challenge. Furthermore, existing metrics fail to robustly capture diverse interaction coverage. This paper addresses these gaps through four key contributions. First, we empirically identify four prevalent interaction types in nine open-source VR projects: fire, manipulate, socket, and custom. Second, we introduce the Interaction Flow Graph, a novel abstraction that systematically models 3D user interactions by identifying targets, actions, and conditions. Third, we construct XRBench3D, a benchmark comprising ten VR scenes that encompass 456 distinct user interactions for evaluating VR interaction testing. Finally, we present XRintTest, an automated testing approach that leverages this graph for dynamic scene exploration and interaction execution. Evaluation on XRBench3D shows that XRintTest is highly effective, reaching 93% coverage of fire, manipulate and socket interactions across all scenes, and achieving 12x higher effectiveness and 6x higher efficiency than random exploration. Moreover, XRintTest can detect runtime exceptions and non-exception interaction issues, including subtle configuration defects. In addition, the Interaction Flow Graph can reveal potential interaction design smells that may compromise intended functionality and hinder testing performance for VR applications.
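The abstract describes the Interaction Flow Graph as modelling each 3D interaction by its target, action, and conditions, and reports coverage over the fire, manipulate, and socket interaction types. The following is a minimal sketch of how such an abstraction might look; all class and method names are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch: each interaction couples a target scene object with a
# user action and optional preconditions, as the abstract describes.
@dataclass(frozen=True)
class Interaction:
    target: str            # scene object the interaction acts on
    action: str            # e.g. "grab", "trigger", "insert"
    kind: str              # one of: "fire", "manipulate", "socket", "custom"
    conditions: tuple = () # preconditions that must hold before execution

class InteractionFlowGraph:
    """Illustrative model of an Interaction Flow Graph (assumed structure)."""

    def __init__(self):
        self.interactions = set()
        self.executed = set()

    def add(self, interaction):
        self.interactions.add(interaction)

    def enabled(self, satisfied_conditions):
        # Interactions whose preconditions are all currently satisfied;
        # a tester would explore the scene to satisfy the rest.
        return [i for i in self.interactions
                if all(c in satisfied_conditions for c in i.conditions)]

    def execute(self, interaction):
        self.executed.add(interaction)

    def coverage(self, kinds=("fire", "manipulate", "socket")):
        # Fraction of interactions of the given kinds that were executed,
        # mirroring the coverage notion restricted to fire/manipulate/socket.
        relevant = {i for i in self.interactions if i.kind in kinds}
        if not relevant:
            return 1.0
        return len(self.executed & relevant) / len(relevant)
```

For example, a "fire" interaction conditioned on first grabbing the weapon only becomes enabled after the grab is executed, which is why condition-aware exploration can outperform random input generation.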
Journal description:
This journal publishes research papers, tutorials, surveys, and accounts of significant industrial experience concerning the foundations, techniques, tools and applications of automated software engineering technology. This includes the study of techniques for constructing, understanding, adapting, and modeling software artifacts and processes.
Coverage in Automated Software Engineering spans automatic systems, collaborative systems, and computational models of human software engineering activities. In addition, it presents knowledge representations and artificial intelligence techniques applicable to automated software engineering, and formal techniques that support or provide theoretical foundations. The journal also includes reviews of books, software, conferences and workshops.