{"title":"Deepdive: a learning-based approach for virtual camera in immersive contents","authors":"Muhammad Irfan , Muhammad Munsif","doi":"10.1016/j.vrih.2022.05.001","DOIUrl":"10.1016/j.vrih.2022.05.001","url":null,"abstract":"<div><p>A 360° video stream provide users a choice of viewing one's own point of interest inside the immersive contents. Performing head or hand manipulations to view the interesting scene in a 360° video is very tedious and the user may view the interested frame during his head/hand movement or even lose it. While automatically extracting user's point of interest (UPI) in a 360° video is very challenging because of subjectivity and difference of comforts. To handle these challenges and provide user's the best and visually pleasant view, we propose an automatic approach by utilizing two CNN models: object detector and aesthetic score of the scene. The proposed framework is three folded: pre-processing, Deepdive architecture, and view selection pipeline. In first fold, an input 360° video-frame is divided into three subframes, each one with 120° view. In second fold, each sub-frame is passed through CNN models to extract visual features in the sub-frames and calculate aesthetic score. Finally, decision pipeline selects the subframe with salient object based on the detected object and calculated aesthetic score. As compared to other state-of-the-art techniques which are domain specific approaches i.e., support sports 360° video, our system support most of the 360° videos genre. Performance evaluation of proposed framework on our own collected data from various websites indicate performance for different categories of 360° videos.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 3","pages":"Pages 247-262"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000420/pdf?md5=de115425d3e578bfb5831120557517a6&pid=1-s2.0-S2096579622000420-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123177239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Serious games in science education: a systematic literature","authors":"Mohib Ullah , Sareer Ul Amin , Muhammad Munsif , Muhammad Mudassar Yamin , Utkurbek Safaev , Habib Khan , Salman Khan , Habib Ullah","doi":"10.1016/j.vrih.2022.02.001","DOIUrl":"10.1016/j.vrih.2022.02.001","url":null,"abstract":"<div><p>Teaching science through computer games, simulations, and artificial intelligence (AI) is an increasingly active research field. To this end, we conducted a systematic literature review on serious games for science education to reveal research trends and patterns. We discussed the role of virtual reality (VR), AI, and augmented reality (AR) games in teaching science subjects like physics. Specifically, we covered the research spanning between 2011 and 2021, investigated country-wise concentration and most common evaluation methods, and discussed the positive and negative aspects of serious games in science education in particular and attitudes towards the use of serious games in education in general.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 3","pages":"Pages 189-209"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000201/pdf?md5=88ee50356fb17742bbff5a754acd90a6&pid=1-s2.0-S2096579622000201-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133346423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual quality assessment of panoramic stitched contents for immersive applications: a prospective survey","authors":"Hayat Ullah , Sitara Afzal , Imran Ullah Khan","doi":"10.1016/j.vrih.2022.03.004","DOIUrl":"10.1016/j.vrih.2022.03.004","url":null,"abstract":"<div><p>The recent advancements in the field of Virtual Reality (VR) and Augmented Reality (AR) have a substantial impact on modern day technology by digitizing each and everything related to human life and open the doors to the next generation Software Technology (Soft Tech). VR and AR technology provide astonishing immersive contents with the help of high quality stitched panoramic contents and 360° imagery that widely used in the education, gaming, entertainment, and production sector. The immersive quality of VR and AR contents are greatly dependent on the perceptual quality of panoramic or 360° images, in fact a minor visual distortion can significantly degrade the overall quality. Thus, to ensure the quality of constructed panoramic contents for VR and AR applications, numerous Stitched Image Quality Assessment (SIQA) methods have been proposed to assess the quality of panoramic contents before using in VR and AR. In this survey, we provide a detailed overview of the SIQA literature and exclusively focus on objective SIQA methods presented till date. For better understanding, the objective SIQA methods are classified into two classes namely Full-Reference SIQA and No-Reference SIQA approaches. Each class is further categorized into traditional and deep learning-based methods and examined their performance for SIQA task. Further, we shortlist the publicly available benchmark SIQA datasets and evaluation metrices used for quality assessment of panoramic contents. In last, we highlight the current challenges in this area based on the existing SIQA methods and suggest future research directions that need to be target for further improvement in SIQA domain.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 3","pages":"Pages 223-246"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000262/pdf?md5=31a80674d804c0f95bfedc53925d3c42&pid=1-s2.0-S2096579622000262-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115462630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AR-assisted children book for smart teaching and learning of Turkish alphabets","authors":"Ahmed L. Alyousify , Ramadhan J. Mstafa","doi":"10.1016/j.vrih.2022.05.002","DOIUrl":"10.1016/j.vrih.2022.05.002","url":null,"abstract":"<div><h3>Background</h3><p>Augmented reality (AR), virtual reality (VR), and remote-controlled devices are driving the need for a better 5G infrastructure to support faster data transmission. In this study, mobile AR is emphasized as a viable and widespread solution that can be easily scaled to millions of end-users and educators because it is lightweight and low-cost and can be implemented in a cross-platform manner. Low-efficiency smart devices and high latencies for real-time interactions via regular mobile networks are primary barriers to the use of AR in education. New 5G cellular networks can mitigate some of these issues via network slicing, device-to-device communication, and mobile edge computing.</p></div><div><h3>Methods</h3><p>In this study, we use a new technology to solve some of these problems. The proposed software monitors image targets on a printed book and renders 3D objects and alphabetic models. In addition, the application considers phonetics. The sound (phonetic) and 3D representation of a letter are played as soon as the image target is detected. 3D models of the Turkish alphabet are created by using Adobe Photoshop with Unity3D and Vuforia SDK.</p></div><div><h3>Results</h3><p>The proposed application teaches Turkish alphabets and phonetics by using 3D object models, 3D letters, and 3D phrases, including letters and sounds.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 3","pages":"Pages 263-277"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000432/pdf?md5=69d83e6557b258f371ddc091f0376da6&pid=1-s2.0-S2096579622000432-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131839835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy-preserving deep learning techniques for wearable sensor-based big data applications","authors":"Rafik Hamza, Dao Minh-Son","doi":"10.1016/j.vrih.2022.01.007","DOIUrl":"10.1016/j.vrih.2022.01.007","url":null,"abstract":"<div><p>Wearable technologies have the potential to become a valuable influence on human daily life where they may enable observing the world in new ways, including, for example, using augmented reality (AR) applications. Wearable technology uses electronic devices that may be carried as accessories, clothes, or even embedded in the user's body. Although the potential benefits of smart wearables are numerous, their extensive and continual usage creates several privacy concerns and tricky information security challenges. In this paper, we present a comprehensive survey of recent privacy-preserving big data analytics applications based on wearable sensors. We highlight the fundamental features of security and privacy for wearable device applications. Then, we examine the utilization of deep learning algorithms with cryptography and determine their usability for wearable sensors. We also present a case study on privacy-preserving machine learning techniques. Herein, we theoretically and empirically evaluate the privacy-preserving deep learning framework's performance. We explain the implementation details of a case study of a secure prediction service using the convolutional neural network (CNN) model and the Cheon-Kim-Kim-Song (CHKS) homomorphic encryption algorithm. Finally, we explore the obstacles and gaps in the deployment of practical real-world applications. Following a comprehensive overview, we identify the most important obstacles that must be overcome and discuss some interesting future research directions.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 3","pages":"Pages 210-222"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000237/pdf?md5=2c9c4d531b19450d41b2bc107e5adf4b&pid=1-s2.0-S2096579622000237-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124436400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EasyGaze: Hybrid eye tracking approach for handheld mobile devices","authors":"Shiwei Cheng, Qiufeng Ping, Jialing Wang, Yijian Chen","doi":"10.1016/j.vrih.2021.10.003","DOIUrl":"https://doi.org/10.1016/j.vrih.2021.10.003","url":null,"abstract":"<div><h3>Background</h3><p>Eye-tracking technology for mobile devices has made significant progress. However, owing to limited computing capacity and the complexity of context, the conventional image feature-based technology cannot extract features accurately, thus affecting the performance.</p></div><div><h3>Methods</h3><p>This study proposes a novel approach by combining appearance- and feature-based eye-tracking methods. Face and eye region detections were conducted to obtain features that were used as inputs to the appearance model to detect the feature points. The feature points were used to generate feature vectors, such as corner center-pupil center, by which the gaze fixation coordinates were calculated.</p></div><div><h3>Results</h3><p>To obtain feature vectors with the best performance, we compared different vectors under different image resolution and illumination conditions, and the results indicated that the average gaze fixation accuracy was achieved at a visual angle of 1.93° when the image resolution was 96 × 48 pixels, with light sources illuminating from the front of the eye.</p></div><div><h3>Conclusions</h3><p>Compared with the current methods, our method improved the accuracy of gaze fixation and it was more usable.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 2","pages":"Pages 173-188"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000146/pdf?md5=1b38d9d1e71d71edf48eca09534dcf7b&pid=1-s2.0-S2096579622000146-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72290201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing Generation Y Interactions: The Case of YPhone","authors":"Wei Liu","doi":"10.1016/j.vrih.2021.12.005","DOIUrl":"10.1016/j.vrih.2021.12.005","url":null,"abstract":"<div><h3>Background</h3><p>With more and more products becoming digital, mobile, and networked, paying attention to the qualities of interactions with is also getting more relevant. While interaction qualities have been addressed in several scientific studies, little attention is being paid to their implementation into a real life, everyday context. This paper describes the development of a novel office phone prototype, YPhone, which demonstrates the application of a specific set of Generation Y interaction qualities (instant, playful, collaborative, expressive, responsive, and flexible) into the context of office work. The working prototype supports office workers in experiencing new type of interactions. It is set out in practice in a series of evaluations. We found that the playful, expressive, responsive, and flexible qualities have more trust than the instant and collaborative qualities. Qualities can be grouped, although this may be different for different products that are evaluated, so researchers must be cautious about generalizing. The overall evaluation was positive with some valuable suggestions to its user interactions and features.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 2","pages":"Pages 132-152"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579621001078/pdf?md5=37d6cba6254b5ed14bd6a4c6d9a1e4ed&pid=1-s2.0-S2096579621001078-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122669936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motivation Effect of Animated Pedagogical Agent’s Personality and Feedback Strategy Types on Learning in Virtual Training Environment","authors":"Yulong Bian","doi":"10.1016/j.vrih.2021.11.001","DOIUrl":"10.1016/j.vrih.2021.11.001","url":null,"abstract":"<div><h3>Background</h3><p>The personality and feedback of an animated pedagogical agent (APA) are vital social-emotional features that render the agent perceptually believable. The effect of them on learning in virtual training remains to be examined.</p></div><div><h3>Methods</h3><p>In this paper, an explanation model was proposed to clarify the underlying mechanism of how these two features affect learners. Two studies were conducted to investigate the model. Study 1 reexamined the effect of APA’s personality type and feedback strategy on flow experience and performance, revealing significant effects of feedback strategy on flow and performance, as well as a marginal significant effect of personality type on performance. To explore the mechanism behind these effects, a theoretical model was proposed by distinguishing between intrinsic and extrinsic motivation effect. Study 2 tested the model and round that the APA’s personality type significantly influences factors in the path of extrinsic motivation effect rather than those in the path of intrinsic motivation effect.</p></div><div><h3>Results</h3><p>By contrast, feedback strategy significantly affected factors in the path of intrinsic motivation effect.</p></div><div><h3>Conclusions</h3><p>The proposed model was supported by these results; further distinguishing the two motivation effects is necessary to understand the respective effects of an APA’s personality and feedback features on learning experiences and outcomes.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 2","pages":"Pages 153-172"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579621000887/pdf?md5=1de5f739000f23f77f18d23f36ffc6d4&pid=1-s2.0-S2096579621000887-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129340391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigation in virtual and real environment using brain computer interface:a progress report","authors":"Haochen Hu , Yue Liu , Kang Yue , Yongtian Wang","doi":"10.1016/j.vrih.2021.10.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2021.10.002","url":null,"abstract":"<div><p>A brain-computer interface (BCI) facilitates bypassing the peripheral nervous system and directly communicating with surrounding devices. Navigation technology using BCI has developed—from exploring the prototype paradigm in the virtual environment (VE) to accurately completing the locomotion intention of the operator in the form of a powered wheelchair or mobile robot in a real environment. This paper summarizes BCI navigation applications that have been used in both real and VEs in the past 20 years. Horizontal comparisons were conducted between various paradigms applied to BCI and their unique signal-processing methods. Owing to the shift in the control mode from synchronous to asynchronous, the development trend of navigation applications in the VE was also reviewed. The contrast between highlevel commands and low-level commands is introduced as the main line to review the two major applications of BCI navigation in real environments: mobile robots and unmanned aerial vehicles (UAVs). Finally, applications of BCI navigation to scenarios outside the laboratory; research challenges, including human factors in navigation application interaction design; and the feasibility of hybrid BCI for BCI navigation are discussed in detail.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 2","pages":"Pages 89-114"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579622000134/pdf?md5=5a4f8528dc3cdff184c54e71eee21d0b&pid=1-s2.0-S2096579622000134-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72290203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and Evaluation of Window Management Operations in AR Headset + Smartphone Interface","authors":"Jie Ren, Chun Yu, Yueting Weng, Chengchi Zhou, Yuanchun Shi","doi":"10.1016/j.vrih.2021.12.002","DOIUrl":"10.1016/j.vrih.2021.12.002","url":null,"abstract":"<div><h3>Background</h3><p>Combining the use of an AR headset and a smartphone can provide wider display and precise touch input simultaneously; it can redefine the way we use applications today. Unfortunately, users are deprived of such benefit because of the independence of two devices. There lacks a kind of intuitive and direct interactions across them. In this paper, we conduct a formative study to understand the window management requirements and interaction preferences of using an AR headset and a smartphone simultaneously and report the insights we gained. Also, we introduce an example vocabulary of window management operations in AR headset + smartphone interface. It allows users to manipulate windows in virtual space and shift windows between devices efficiently and seamlessly.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"4 2","pages":"Pages 115-131"},"PeriodicalIF":0.0,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579621001017/pdf?md5=d9e84d57b66e73196cd29d773375f889&pid=1-s2.0-S2096579621001017-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115876774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}