Virtual Reality. Pub Date: 2024-03-08. DOI: 10.1007/s10055-024-00948-7
Mahdiyeh Sadat Moosavi, Pierre Raimbaud, Christophe Guillet, Frédéric Mérienne
Title: Enhancing weight perception in virtual reality: an analysis of kinematic features
Abstract: This study investigates weight perception in virtual reality (VR) in the absence of real-world kinesthetic feedback, by means of an illusory method called pseudo-haptics. This approach dissociates visual input from somatosensory feedback and induces the sensation of a virtual object's load by manipulating visual input alone. To this end, the control-display ratio, i.e., the ratio between the real and virtual motions of the arm, is modified to produce a visual illusion in the virtual object's position as well. VR users perceive this as a velocity variation in the object's displacement, which helps them form a stronger sensation of virtual weight. A primary contribution of this paper is a novel, holistic assessment methodology that measures the sense of presence in VR, particularly while participants lift virtual objects and experience their weight. Our study examined the effect of virtual object weight on the kinematic parameters and velocity profiles of participants' upward arm motions, alongside a parallel experiment conducted with real weights. Comparing the lifting of real objects with that of virtual objects provided insight into the variations in the kinematic features of participants' arm motions. Additionally, subjective measurements using the Borg CR10 questionnaire assessed participants' perceptions of hand fatigue. The analysis of the collected data, encompassing both subjective and objective measurements, concluded that participants experienced similar sensations of fatigue and similar changes in hand kinematics during virtual object tasks driven by pseudo-haptic feedback and during real weight-lifting tasks. This consistency underscores the efficacy of pseudo-haptic feedback in simulating realistic weight sensations in virtual environments.
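The control-display ratio manipulation described above can be sketched as a simple scaling of the real arm displacement. The inverse-mass mapping and the clamp bounds below are illustrative assumptions for exposition, not the model used in the paper:

```python
def pseudo_haptic_displacement(real_delta, virtual_mass_kg, reference_mass_kg=1.0):
    """Scale a real arm displacement (metres, per-axis) by a control-display
    (C/D) ratio. Heavier virtual objects get a ratio below 1, so the virtual
    hand lags behind the real hand and the object feels heavier.

    The inverse-proportional mapping and the [0.3, 1.0] clamp are
    hypothetical choices, not taken from the study."""
    cd_ratio = max(0.3, min(1.0, reference_mass_kg / virtual_mass_kg))
    return [cd_ratio * d for d in real_delta]
```

For example, a 10 cm upward real motion maps to a 5 cm virtual motion for a 2 kg virtual object under this sketch, while objects at or below the reference mass move one-to-one.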
Virtual Reality. Pub Date: 2024-03-08. DOI: 10.1007/s10055-024-00956-7
Triton Ong, Julia Ivanova, Hiral Soni, Hattie Wilczewski, Janelle Barrera, Mollie Cummins, Brandon M. Welch, Brian E. Bunnell
Title: Therapist perspectives on telehealth-based virtual reality exposure therapy
Abstract: Virtual reality (VR) can enhance mental health care. In particular, the effectiveness of VR-based exposure therapy (VRET) has been well demonstrated for the treatment of anxiety disorders. However, most applications of VRET remain localized to clinic spaces. We explored mental health therapists' perceptions of telehealth-based VRET (tele-VRET) by conducting semi-structured, qualitative interviews with 18 telemental health therapists between October and December 2022. Interview topics included telehealth experiences, exposure therapy over telehealth, previous experiences with VR, and perspectives on tele-VRET. Therapists described how telehealth reduced barriers (88.9%, 16/18), enhanced therapy (61.1%, 11/18), and improved access to clients (38.9%, 7/18), but entailed problems with technology (61.1%, 11/18), uncontrolled settings (55.6%, 10/18), and communication difficulties (50%, 9/18). Therapists adapted exposure therapy to telehealth by using online resources (66.7%, 12/18), preparing client expectations (55.6%, 10/18), and adjusting workflows (27.8%, 5/18). Most therapists had used VR before (72.2%, 13/18) and had positive impressions of it (55.6%, 10/18), but none had used VR clinically. In response to tele-VRET, therapists requested interactive session activities (77.8%, 14/18) and customizable intervention components (55.6%, 10/18). Concerns about tele-VRET included risks with certain clients (77.8%, 14/18), costs (50%, 9/18), side effects and privacy (22.2%, 4/18), and inappropriateness for specific forms of exposure therapy (16.7%, 3/18). These results reveal how combining telehealth and VRET may expand therapeutic options for mental health care providers and can help inform the collaborative development of immersive health technologies.
Virtual Reality. Pub Date: 2024-03-08. DOI: 10.1007/s10055-024-00970-9
Henar Guillen-Sanz, David Checa, Ines Miguel-Alonso, Andres Bustillo
Title: A systematic review of wearable biosensor usage in immersive virtual reality experiences
Abstract: Wearable biosensors are increasingly incorporated into immersive virtual reality (iVR) applications, a trend attributed to the availability of better-quality, less costly, and easier-to-use devices. However, no consensus has yet emerged on the optimal combinations. This review aims to identify the best examples of biosensor usage in combination with iVR applications. The 560 papers included in the review were classified into seven fields of application: psychology, medicine, sports, education, ergonomics, military, and tourism and marketing. The use of each type of wearable biosensor and head-mounted display was analyzed for each field of application. The development of each iVR application was then analyzed according to its goals, user interaction levels, and capacity to adapt the iVR environment to biosensor feedback. Finally, the evaluation of the iVR experience was studied, considering issues such as sample size, the presence of a control group, and post-assessment routines. Through this working method, the most common solutions, best practices, and most promising trends in biofeedback-based iVR applications were identified for each field of application. In addition, guidelines oriented toward good practice are proposed for the development of future iVR applications with biofeedback. The results of this review suggest that the use of biosensors within iVR environments needs to be standardized in some fields of application, especially regarding the adaptation of the iVR experience to real-time biosignals to improve user performance.
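The review's closing point, adapting an iVR experience to real-time biosignals, amounts to a feedback loop between a sensor stream and an environment parameter. A minimal single-step sketch follows; the heart-rate proxy for arousal, the tolerance band, and the step size are hypothetical choices, not drawn from any reviewed study:

```python
def adapt_difficulty(current_difficulty, heart_rate_bpm, baseline_bpm,
                     step=0.1, tolerance=10.0):
    """One iteration of a biofeedback loop: nudge the iVR task difficulty
    down when heart rate runs above the user's baseline (a crude stress
    proxy) and up when the user appears under-aroused, keeping the
    difficulty inside [0, 1]."""
    if heart_rate_bpm > baseline_bpm + tolerance:
        current_difficulty -= step        # over-aroused: ease off
    elif heart_rate_bpm < baseline_bpm - tolerance:
        current_difficulty += step        # under-aroused: challenge more
    return max(0.0, min(1.0, current_difficulty))
```

In a real application this step would run once per sensor window (e.g., every few seconds), with the baseline taken from a calibration phase.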
Virtual Reality. Pub Date: 2024-03-08. DOI: 10.1007/s10055-024-00942-z
Qing Gong, Ning Zou, Wenjing Yang, Qi Zheng, Pengrui Chen
Title: User experience model and design strategies for virtual reality-based cultural heritage exhibition
Abstract: A virtual reality (VR)-based cultural heritage exhibition (VRCHE) is an important type of VR-based museum exhibition. The user experience (UX) design of VRCHEs faces both opportunities and challenges because human-computer interaction differs between VR-based and conventional interfaces, so proposing a UX model for VRCHEs is crucial. Although some existing works study the UX models of VRCHEs, they are not complete enough to describe the UX of VRCHEs or to offer design strategies, owing to the methodologies and experimental materials they use. This study designs experiments based on grounded theory that combine qualitative and quantitative approaches. It then synthesizes the three-level coding and quantitative analysis findings, builds a detailed model of the VRCHE UX through theoretical coding, and proposes design strategies.
Virtual Reality. Pub Date: 2024-03-08. DOI: 10.1007/s10055-024-00945-w
Orestis Sarakatsanos, Anastasios Papazoglou-Chalikias, Machi Boikou, Elisavet Chatzilari, Michaela Jauk, Ursina Hafliger, Spiros Nikolopoulos, Ioannis Kompatsiaris
Title: VR Designer: enhancing fashion showcases through immersive virtual garment fitting
Abstract: This paper introduces a virtual reality (VR) application tailored for fashion designers and retailers. It transcends traditional garment design and demonstration boundaries by presenting an immersive digital garment showcase within a captivating VR environment. Simulating a virtual retail store, designers navigate freely, selecting from an array of avatar-garment combinations and exploring garments from diverse perspectives. This immersive experience offers designers a precise representation of the final product's aesthetics, fit, and functionality on the human body. The application can be considered a pre-manufacturing layer that gives designers and retailers a precise understanding of how the actual garment will look and behave. Evaluation involved comprehensive feedback from both professional and undergraduate fashion designers, gathered through usability testing sessions.
Virtual Reality. Pub Date: 2024-03-08. DOI: 10.1007/s10055-024-00947-8
José L. Gómez-Sirvent, Alicia Fernández-Sotos, Antonio Fernández-Caballero, Desirée Fernández-Sotos
Title: Assessment of music performance anxiety in a virtual auditorium through the study of ambient lighting and audience distance
Abstract: Performance anxiety is a common problem affecting musicians' concentration and well-being. Musicians frequently encounter greater challenges and emotional discomfort when performing in front of an audience. Recent research suggests an important relationship between the characteristics of the built environment and people's well-being. In this study, we explore modifying the built environment to create spaces in which musicians are less aware of the presence of the audience and can express themselves more comfortably. An experiment was conducted with 61 conservatory musicians playing their instruments in a virtual auditorium in front of an audience of hundreds of virtual humans. They performed at different distances from the audience and under different levels of ambient lighting while their eye movements were recorded. These data, together with questionnaires, were used to analyse how the environment is perceived. The results showed that reducing the light intensity above the audience made the view of the auditorium more calming; the same effect was observed when the distance between the audience and the musician was increased. Eye-tracking data showed a significant reduction in saccadic eye movements as the distance from the audience increased. This work provides a novel approach to studying the influence of architecture on musicians' experience during solo performances. The findings are useful to both designers and researchers.
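Saccade measures like those reported above are commonly derived with a velocity-threshold (I-VT) classifier: samples whose angular velocity exceeds a threshold are grouped into saccadic episodes. The sketch below uses a generic 30 deg/s threshold on a 1-D angle trace for illustration; the study's actual detection parameters are not stated in the abstract:

```python
def count_saccades(gaze_angles_deg, dt, velocity_threshold_deg_s=30.0):
    """Count saccades in a gaze-angle trace sampled every `dt` seconds.

    Consecutive above-threshold samples are merged into a single saccade
    (a minimal I-VT-style classifier; real pipelines also filter noise
    and enforce minimum durations)."""
    in_saccade = False
    count = 0
    for prev, curr in zip(gaze_angles_deg, gaze_angles_deg[1:]):
        velocity = abs(curr - prev) / dt
        if velocity > velocity_threshold_deg_s:
            if not in_saccade:
                count += 1
                in_saccade = True
        else:
            in_saccade = False
    return count
```

A lower saccade count at greater audience distances, as the study reports, would show up directly in this measure.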
Virtual Reality. Pub Date: 2024-03-06. DOI: 10.1007/s10055-024-00960-x
Laura Pérez-Pachón, Parivrudh Sharma, Helena Brech, Jenny Gregory, Terry Lowe, Matthieu Poyade, Flora Gröning
Title: Augmented reality headsets for surgical guidance: the impact of holographic model positions on user localisation accuracy
Abstract: Novel augmented reality headsets such as the HoloLens can overlay patient-specific virtual models of resection margins on the patient's skin, providing surgeons with information not normally available in the operating room. For this to be useful, surgeons wearing the headset must be able to localise the virtual models accurately. We measured the error with which users localise virtual models at different positions and distances from their eyes. Healthy volunteers aged 20–59 years (n = 54) performed 81 exercises involving the localisation of a virtual hexagon's vertices overlaid on a monitor surface. Nine predefined positions and three distances between the virtual hexagon and the users' eyes (65, 85 and 105 cm) were set. We found that some model positions and the shortest distance (65 cm) led to larger localisation errors than the other positions and the larger distances (85 and 105 cm). Positional errors of more than 5 mm occurred in 29.8% of cases, and margin errors of 1–5 mm in over 40% of cases. Strong outliers were also found (e.g. margin shrinkage of up to 17.4 mm in 4.3% of cases). The measured errors may result in poor surgical outcomes, e.g. incomplete tumour excision or inaccurate flap design, which can lead to tumour recurrence and flap failure, respectively. Reducing the localisation errors associated with arm-reach distances between virtual models and users' eyes is necessary for augmented reality headsets to be suitable for surgical purposes. In addition, training surgeons in the use of these headsets may help to minimise localisation errors.
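The positional and margin errors reported above can be computed from localised versus true vertex coordinates. In the sketch below, positional error is the Euclidean distance per vertex, and margin error is approximated as the change in outline perimeter (negative = shrinkage); the perimeter proxy is an assumption for illustration, not necessarily the study's exact metric:

```python
import math

def positional_error_mm(perceived, true):
    """Euclidean distance (mm) between where a user localised a virtual
    vertex and its true position."""
    return math.dist(perceived, true)

def margin_error_mm(perceived_vertices, true_vertices):
    """Signed margin error (mm) between a traced outline and the true one,
    approximated as the difference in perimeters. Negative values indicate
    margin shrinkage, as in the outliers the study reports."""
    def perimeter(vs):
        return sum(math.dist(vs[i], vs[(i + 1) % len(vs)]) for i in range(len(vs)))
    return perimeter(perceived_vertices) - perimeter(true_vertices)
```

Under this sketch, a traced outline sitting uniformly inside the true hexagon yields a negative margin error, matching the "margin shrinkage" terminology.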
Virtual Reality. Pub Date: 2024-03-06. DOI: 10.1007/s10055-024-00953-w
Mariano Banquiero, Gracia Valdeolivas, David Ramón, M.-Carmen Juan
Title: A color Passthrough mixed reality application for learning piano
Abstract: This work presents a mixed reality (MR) application that uses color Passthrough for learning to play the piano. A study compared the performance outcomes and subjective experience of participants using the MR application with those of participants using a system based on Synthesia (N = 33). The results show that both the MR application and Synthesia were effective for learning piano; however, students played the pieces significantly better when using the MR application. Both applications provided a satisfying user experience, but the students' subjective experience was better with the MR application. Other conclusions derived from the study include the following: (1) the students' outcomes and their subjective opinions of the experience with the MR application were independent of age and gender; (2) the sense of presence offered by the MR application was high (above 6 on a scale of 1 to 7); (3) the adverse effects induced by wearing the Meta Quest Pro and using our MR application were negligible; and (4) the students preferred the MR application. In conclusion, the advantage of our MR application over other types of applications (e.g., non-projected piano-roll notation) is that the user has a direct view of the piano with the help elements integrated into that view: the user does not have to take their eyes off the keyboard and can stay focused on playing.
Virtual Reality. Pub Date: 2024-03-05. DOI: 10.1007/s10055-024-00965-6
Hector Tovanche-Picon, Javier González-Trejo, Ángel Flores-Abad, Miguel Ángel García-Terán, Diego Mercado-Ravell
Title: Real-time safe validation of autonomous landing in populated areas: from virtual environments to Robot-In-The-Loop
Abstract: Safe autonomous landing of unmanned aerial vehicles (UAVs) in populated areas is crucial for the successful integration of UAVs into populated environments. Nonetheless, validating autonomous landing in real scenarios is a challenging task that carries a high risk of injuring people. In this work, we propose a framework for the safe, real-time, and thorough evaluation of vision-based autonomous landing in populated scenarios, using photo-realistic virtual environments and physics-based simulation. The proposed evaluation pipeline uses the Unreal graphics engine coupled with AirSim for realistic drone simulation to evaluate landing strategies. Software- and Hardware-In-The-Loop testing can then assess the performance of the algorithms beforehand. The final validation stage is a Robot-In-The-Loop evaluation strategy in which a real drone performs autonomous landing maneuvers in real time while an avatar drone in a virtual environment mimics its behavior, with the detection algorithms running in the virtual environment (virtual reality for the robot). The method determines safe landing areas using computer vision and convolutional neural networks to avoid colliding with people in static and dynamic scenarios. To test the robustness of the algorithms under adverse conditions, different urban-like environments were implemented, including moving agents and different weather conditions. We also propose metrics to quantify the performance of the landing strategies, establishing a baseline for comparison with future work on this challenging task, and analyze them through several randomized iterations. The proposed approach allowed us to safely validate the autonomous landing strategies, providing an evaluation pipeline and a benchmark for comparison. An extensive evaluation showed a 99% success rate in static scenarios and 87% in dynamic ones, demonstrating that autonomous landing algorithms considerably reduce accidents involving humans and facilitate the integration of drones into human-populated spaces, which may help unleash the full potential of drones in urban environments. This type of development also increases the safety of drone operations, which could advance drone flight regulations and allow operation in closer proximity to humans.
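Downstream of the people-detection network described above, selecting a safe landing spot reduces to a clearance check against the detection mask. The sketch below uses a binary grid and a Chebyshev-distance safety radius purely for illustration; the authors' actual CNN-based pipeline and spatial representation are not specified in the abstract:

```python
def is_safe_landing_cell(people_mask, row, col, safety_radius):
    """Check a candidate landing cell against a binary people mask
    (1 = person detected in that cell). The cell is safe only if no
    person lies within `safety_radius` cells (Chebyshev distance),
    with window bounds clipped at the grid edges."""
    rows, cols = len(people_mask), len(people_mask[0])
    for r in range(max(0, row - safety_radius), min(rows, row + safety_radius + 1)):
        for c in range(max(0, col - safety_radius), min(cols, col + safety_radius + 1)):
            if people_mask[r][c]:
                return False
    return True
```

In a dynamic scenario this check would be re-run every frame as the mask updates, aborting the descent if the chosen cell loses its clearance.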
Virtual Reality. Pub Date: 2024-03-05. DOI: 10.1007/s10055-024-00962-9
Jesus Mayor, Pablo Calleja, Felix Fuentes-Hurtado
Title: Long short-term memory prediction of user's locomotion in virtual reality
Abstract: Accurately predicting a user's displacement remains a challenge in virtual reality; such prediction could become a key element of so-called redirected walking methods. Meanwhile, deep learning provides new tools for this type of prediction: in particular, long short-term memory (LSTM) recurrent neural networks have recently obtained promising results, which encourages further research on predicting VR users' displacement. This manuscript focuses on the collection of positional data and a new way to train a deep learning model to obtain more accurate predictions. Data were collected from 44 participants and analyzed with several existing prediction algorithms. The best results were obtained with a new idea: using rotation quaternions together with the three positional dimensions to train the previously existing models. The authors believe there is still much room for improvement in this research area through the use of new deep learning models.
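The feature idea the abstract describes, feeding an LSTM windows of 3-D position plus rotation quaternion (seven features per timestep) to predict the next displacement, might be sketched as follows in PyTorch. The hidden size, window length, and single-layer architecture are illustrative assumptions, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

class LocomotionLSTM(nn.Module):
    """Map a window of past head poses (x, y, z + quaternion w, x, y, z
    = 7 features per timestep) to the next 3-D displacement."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=7, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 3)

    def forward(self, pose_window):      # pose_window: (batch, timesteps, 7)
        out, _ = self.lstm(pose_window)
        return self.head(out[:, -1])     # predicted (dx, dy, dz)

model = LocomotionLSTM()
window = torch.randn(8, 30, 7)           # batch of 8 pose windows, 30 steps each
pred = model(window)
```

Training would minimise, e.g., the mean squared error between `pred` and the displacement actually observed one step ahead.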