{"title":"An intelligent experimental container suite: using a chemical experiment with virtual-real fusion as an example","authors":"Lurong Yang , Zhiquan Feng , Junhong Meng","doi":"10.1016/j.vrih.2022.07.008","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.008","url":null,"abstract":"<div><h3>Background</h3><p>At present, the teaching of experiments in primary and secondary schools is affected by cost and security factors. The existing research on virtual-experiment platforms alleviates this problem. However, the lack of real experimental equipment and the use of a single channel to understand users’ intentions weaken these platforms operationally and degrade the naturalness of interactions. To slove the above problems,we propose an intelligent experimental container structure and a situational awareness algorithm,both of which are verified and then applied to a chemical experiment involving virtual-real fusion. First, acquired images are denoised in the visual channel, using maximum diffuse reflection chroma to remove overexposures. Second, container situational awareness is realized by segmenting the image liquid level and establishing a relation-fitting model. Then, strategies for constructing complete behaviors and making priority comparisons among behaviors are adopted for information complementarity and information independence, respectively. A multichannel intentional understanding model and an interactive paradigm fusing vision, hearing and touch are proposed. 
The results show that the designed experimental container and algorithm in a virtual chemical experiment platform can achieve natural human-computer interaction, enhance the user's sense of operation, and attain high user satisfaction.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 317-337"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
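The container situational awareness step above maps a segmented liquid-level height to a liquid volume through a relation-fitting model. The abstract does not spell that model out, so the sketch below assumes the simplest case: a least-squares linear fit calibrated from a few (pixel height, volume) pairs, which suffices for a roughly cylindrical container. Function names and calibration values are illustrative only.

```python
def fit_level_to_volume(pixel_heights, volumes):
    """Least-squares linear fit volume ≈ a * h + b.

    For a roughly cylindrical container, liquid volume is close to linear
    in the segmented liquid-level height; other container shapes would
    need a higher-order relation model.
    """
    n = len(pixel_heights)
    mean_h = sum(pixel_heights) / n
    mean_v = sum(volumes) / n
    cov = sum((h - mean_h) * (v - mean_v)
              for h, v in zip(pixel_heights, volumes))
    var = sum((h - mean_h) ** 2 for h in pixel_heights)
    a = cov / var
    b = mean_v - a * mean_h
    return a, b

def estimate_volume(pixel_height, a, b):
    """Map a newly segmented liquid-level height to an estimated volume."""
    return a * pixel_height + b

# Hypothetical calibration pairs: (liquid-level height in pixels, volume in mL).
heights = [40, 80, 120, 160]
volumes = [50.0, 100.0, 150.0, 200.0]
a, b = fit_level_to_volume(heights, volumes)
print(round(estimate_volume(100, a, b), 1))  # → 125.0
```

At runtime, the segmentation step would supply `pixel_height` per frame, so the fit runs once offline per container type.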
{"title":"Modeling heterogeneous behaviors with different strategies in a terrorist attack","authors":"Le Bi , Tingting Liu , Zhen Liu , Jason Teo , Yumeng Zhao , Yanjie Chai","doi":"10.1016/j.vrih.2022.08.015","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.015","url":null,"abstract":"<div><p>In terrorist attack simulations, existing methods do not describe individual differences, which means different individuals will not have different behaviors. To address this problem, we propose a framework to model people’s heterogeneous behaviors in terrorist attack. For pedestrian, we construct an emotional model that takes into account its personality and visual perception. The emotional model is then combined with pedestrians' relationship networks to make the decision-making model. With the proposed decision-making model, pedestrian may have altruistic behaviors. For terrorist, a mapping model is developed to map its antisocial personality to its attacking strategy. The experiments show that the proposed algorithm can generate realistic heterogeneous behaviors that are consistent with existing psychological research findings.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 351-365"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of intelligent diagnosis methods of imaging gland cancer based on machine learning","authors":"Han Jiang, Wen-Jia Sun, Han-Fei Guo, Jia-Yuan Zeng, Xin Xue, Shuai Li","doi":"10.1016/j.vrih.2022.09.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.09.002","url":null,"abstract":"<div><h3>Background</h3><p>Gland cancer is a high-incidence disease endangering human health, and its early detection and treatment need efficient, accurate and objective intelligent diagnosis methods. In recent years, the advent of machine learning techniques has yielded satisfactory results in the intelligent gland cancer diagnosis based on clinical images, greatly improving the accuracy and efficiency of medical image interpretation while reducing the workload of doctors. The foci of this paper is to review, classify and analyze the intelligent diagnosis methods of imaging gland cancer based on machine learning and deep learning. To start with, the paper presents a brief introduction about some basic imaging principles of multi-modal medical images, such as the commonly used CT, MRI, US, PET, and pathology. In addition, the intelligent diagnosis methods of imaging gland cancer are further classified into supervised learning and weakly-supervised learning. Supervised learning consists of traditional machine learning methods like KNN, SVM, multilayer perceptron, etc. and deep learning methods evolving from CNN, meanwhile, weakly-supervised learning can be further categorized into active learning, semi-supervised learning and transfer learning. The state-of-the-art methods are illustrated with implementation details, including image segmentation, feature extraction, the optimization of classifiers, and their performances are evaluated through indicators like accuracy, precision and sensitivity. 
To conclude, the challenges and development trend of intelligent diagnosis methods of imaging gland cancer are addressed and discussed.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 293-316"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
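Among the traditional supervised methods the review covers, KNN is the simplest to make concrete. The sketch below is a generic k-nearest-neighbors majority vote over toy 2-D feature vectors; the features and the benign/malignant labels are hypothetical and not taken from any surveyed method.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training samples by Euclidean distance to the query.
    neighbors = sorted(zip(train, labels), key=lambda t: math.dist(t[0], query))
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors (hypothetical texture/intensity features of
# gland-region image patches) with toy benign/malignant labels.
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = ["benign", "benign", "malignant", "malignant"]
print(knn_predict(train, labels, (0.85, 0.85)))  # → malignant
```

In the surveyed systems, the feature vectors would come from a segmentation and feature-extraction stage rather than being hand-written literals.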
{"title":"Training on LSA lifeboat operation using Mixed Reality","authors":"Spyridon Nektarios Bolierakis, Margarita Kostovasili, Lazaros Karagiannidis, Dr. Angelos Amditis","doi":"10.1016/j.vrih.2023.02.005","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.02.005","url":null,"abstract":"<div><h3>Background</h3><p>This work aims to provide an overview of the Mixed Reality (MR) technology’s use in maritime industry for training purposes. Current training procedures cover a broad range of procedural operations for Life-Saving Appliances (LSA) lifeboats; however, several gaps and limitations have been identified related to the practical training that can be addressed through the use of MR. Augmented, Virtual and Mixed Reality applications are already used in various fields in maritime industry, but its full potential has not been yet exploited. SafePASS project aims to exploit MR advantages in the maritime training by introducing a relevant application focusing on use and maintenance of LSA lifeboats.</p></div><div><h3>Methods</h3><p>An MR Training application is proposed supporting the training of crew members in equipment usage and operation, as well as in maintenance activities and procedures. The application consists of the training tool that trains crew members on handling lifeboats, the training evaluation tool that allows trainers to assess the performance of trainees and the maintenance tool that supports crew members to perform maintenance activities and procedures on lifeboats. For each tool, an indicative session and scenario workflow are implemented, along with the main supported interactions of the trainee with the equipment.</p></div><div><h3>Results</h3><p>The application has been tested and validated both in lab environment and using a real LSA lifeboat, resulting to improved experience for the users that provided feedback and recommendations for further development. 
The application has also been demonstrated onboard a cruise ship, showcasing the supported functionalities to relevant stakeholders, who recognized the added value of the application and suggested potential future exploitation areas.</p></div><div><h3>Conclusions</h3><p>The MR Training application has been evaluated as very promising in providing a user-friendly training environment that can support crew members in LSA lifeboat operation and maintenance, while remaining open to improvement and further expansion.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 201-212"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Spatiotemporal Intelligent Framework and Experimental Platform for Urban Digital Twins","authors":"Jinxing Hu , Zhihan Lv , Diping Yuan , Bing He , Wenjiang Chen , Xiongfei Ye , Donghao Li , Ge Yang","doi":"10.1016/j.vrih.2022.08.018","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.018","url":null,"abstract":"<div><p>This work emphasizes the current research status of the urban Digital Twins to establish an intelligent spatiotemporal framework. A Geospatial Artificial Intelligent (GeoAI) system is developed based on the Geographic Information System and Artificial Intelligence. It integrates multi-video technology and Virtual City in urban Digital Twins. Besides, an improved small object detection model is proposed: YOLOv5-Pyramid, and Siamese network video tracking models, namely MPSiam and FSSiamese, are established. Finally, an experimental platform is built to verify the georeferencing correction scheme of video images. The experimental results show that the Multiply-Accumulate value of MPSiam is 0.5B, and that of ResNet50-Siam is 4.5B. Besides, the model is compressed by 4.8 times. The inference speed has increased by 3.3 times, reaching 83 Frames Per Second. 3% of the Average Expectation Overlap is lost. 
Therefore, the urban Digital Twins-oriented GeoAI framework established here has excellent performance for video georeferencing and target detection problems.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 213-231"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
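The abstract does not describe the georeferencing correction scheme itself. A common minimal assumption for mapping video pixels to geographic coordinates is a 6-parameter affine transform fitted from ground control points, sketched below via Cramer's rule; the control-point values are made up for illustration.

```python
def affine_from_points(pixel, geo):
    """Solve the 6-parameter affine georeferencing transform
        X = a*x + b*y + c,  Y = d*x + e*y + f
    from three (pixel, geographic) control-point pairs, via Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = pixel
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    def solve(v1, v2, v3):
        # Replace one column of the 3x3 system with the target values.
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c
    (X1, Y1), (X2, Y2), (X3, Y3) = geo
    return solve(X1, X2, X3), solve(Y1, Y2, Y3)

def apply_affine(coeffs, x, y):
    """Map a pixel coordinate to a geographic coordinate."""
    (a, b, c), (d, e, f) = coeffs
    return a * x + b * y + c, d * x + e * y + f

# Hypothetical control points: image corners pinned to lon/lat values.
pixel = [(0, 0), (100, 0), (0, 100)]
geo = [(10.0, 50.0), (10.1, 50.0), (10.0, 49.9)]
coeffs = affine_from_points(pixel, geo)
print(apply_affine(coeffs, 50, 50))
```

With more than three control points, the same model would be fitted by least squares instead of solved exactly.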
{"title":"Adaptive navigation assistance based on eye movement features in virtual reality","authors":"Song Zhao, Shiwei Cheng","doi":"10.1016/j.vrih.2022.07.003","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.003","url":null,"abstract":"<div><h3>Background</h3><p>Navigation assistance is very important for users when roaming in virtual reality scenes, however, the traditional navigation method requires users to manually request a map for viewing, which leads to low immersion and poor user experience.</p></div><div><h3>Methods</h3><p>To address this issue, first, we collected data when users need navigation assistance in a virtual reality environment, including various eye movement features such as gaze fixation, pupil size, and gaze angle, etc. After that, we used the Boostingbased XGBoost algorithm to train a prediction model, and finally used it to predict whether users need navigation assistance in a roaming task.</p></div><div><h3>Results</h3><p>After evaluating the performance of the model, the accuracy, precision, recall, and F1-score of our model reached about 95%. 
In addition, by applying the model to a virtual reality scene, an adaptive navigation assistance system based on the user’s real-time eye movement data was implemented.</p></div><div><h3>Conclusions</h3><p>Compared with traditional navigation assistance methods, our adaptive navigation assistance allows users to stay more immersed and roam more effectively in a VR environment.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 232-248"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
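The gaze-fixation features mentioned above must be extracted from raw gaze samples before a model such as XGBoost can be trained. The abstract does not say which detector the authors use; a standard choice is dispersion-threshold identification (I-DT), sketched here with hypothetical threshold values.

```python
def _dispersion(pts):
    """Bounding-box dispersion (width + height) of a set of gaze points."""
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, dispersion_thresh=0.05, min_samples=4):
    """I-DT fixation detection over (x, y) gaze samples in normalized
    screen coordinates. A window of at least `min_samples` points whose
    dispersion stays under `dispersion_thresh` becomes one fixation,
    reported as (centroid_x, centroid_y, sample_count)."""
    fixations, i = [], 0
    while i + min_samples <= len(gaze):
        j = i + min_samples
        if _dispersion(gaze[i:j]) <= dispersion_thresh:
            # Grow the window while it stays compact.
            while j < len(gaze) and _dispersion(gaze[i:j + 1]) <= dispersion_thresh:
                j += 1
            window = gaze[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((cx, cy, len(window)))
            i = j
        else:
            i += 1
    return fixations

# Six tightly clustered samples (one fixation) followed by a saccade.
samples = [(0.50, 0.50), (0.51, 0.50), (0.50, 0.51), (0.51, 0.51),
           (0.50, 0.50), (0.51, 0.51), (0.90, 0.10), (0.20, 0.80)]
print(detect_fixations(samples))
```

Fixation count and duration from such a detector, alongside pupil size and gaze angle, would then form the per-window feature vector fed to the classifier.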
{"title":"Research on AGV task path planning based on improved A* algorithm","authors":"Wang Xianwei , Ke Fuyang , Lu Jiajia","doi":"10.1016/j.vrih.2022.11.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.11.002","url":null,"abstract":"<div><h3>Background</h3><p>In recent years, automatic guided vehicles (AGVs) have developed rapidly and been widely applied in intelligent transportation, cargo assembly, military testing, and other fields. One of the key issues in these applications is path planning. Global path planning results based on known environmental information are used as the ideal path for AGVs combined with local path planning to achieve safe and fast arrival at the destination. The global planning method planning results as the ideal path should meet the requirements of as few turns as possible, short planning time, and continuous path curvature.</p></div><div><h3>Methods</h3><p>We propose a global path-planning method based on an improved A * algorithm. And the robustness of the algorithm is verified by simulation experiments in typical multi obstacles and indoor scenarios. To improve the efficiency of pathfinding time, we increase the heuristic information weight of the target location and avoided the invalid cost calculation of the obstacle areas in the dynamic programming process. Then, the optimality of the number of turns in the path is ensured based on the turning node backtracking optimization method. Since the final global path needs to satisfy the AGV kinematic constraints and the curvature continuity condition, we adopt a curve smoothing scheme and select the optimal result that meets the constraints.</p></div><div><h3>Conclusions</h3><p>Simulation results show that the improved algorithm proposed in this paper outperforms the traditional method and can help AGVs improve the efficiency of task execution by efficiently planning a path with low complexity and smoothness. 
Additionally, this scheme provides a new solution for global path planning of unmanned vehicles.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 249-265"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
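Increasing the heuristic weight of the target location, as described above, corresponds to the classic weighted A* idea f = g + w·h with w > 1, which biases expansion toward the goal at the cost of a bounded loss of optimality. The grid, weight, and unit-cost model below are illustrative only; the paper's turn-backtracking and curve-smoothing steps are omitted, and obstacle cells are simply never expanded, mirroring the "no cost calculation over obstacle areas" idea.

```python
import heapq

def weighted_astar(grid, start, goal, w=1.5):
    """Grid path planning with an inflated heuristic f = g + w * h.

    Cells with value 1 are obstacles and are skipped entirely, so no
    cost is ever computed for them. Returns the list of cells from
    start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_heap = [(w * h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        _f, g, cur, path = heapq.heappop(open_heap)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(
                        open_heap,
                        (g + 1 + w * h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = weighted_astar(grid, (0, 0), (3, 3))
print(len(path) - 1)  # number of moves
```

With w = 1 this reduces to plain A*; larger w trades path optimality for fewer node expansions, which is the efficiency lever the abstract describes.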
{"title":"A data-based real-time petrochemical gas diffusion simulation approach on virtual reality","authors":"Min Yang , Yong Han , Chang Su , Xue Li","doi":"10.1016/j.vrih.2023.01.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.01.001","url":null,"abstract":"<div><h3>Background</h3><p>Petrochemical products are flammable, explosive, and toxic, petrochemical accidents are generally extremely destructive. Therefore, disaster analysis and prediction and real-time simulation have become important means to control and reduce accident hazards.</p></div><div><h3>Methods</h3><p>In this study, a complete real-time simulation solution of gas diffusion with coordinate data and concentration data is proposed, which is mainly aimed at the simulation of the types of harmful gas leakage and diffusion accidents in the petrochemical industry. The rendering effect is more continuous and accurate through grid homogenization and trilinear interpolation. A data processing and rendering parallelization process is presented to improve simulation efficiency. Combines gas concentration and fragment transparency to synthesize transparent pixels in a scene. To ensure the approximate accuracy of the rendering effect, improve the efficiency of real-time rendering, and meet the requirement of intuitive perception using concentration data, a weighted blended order-independent transparency with enhanced alpha weight is presented, which can provide a more intuitive perception of hierarchical information of concentration data while preserving depth information. 
In this study, three order-independent transparency algorithms (the depth peeling algorithm, weighted blended order-independent transparency, and weighted blended order-independent transparency with enhanced alpha weight) are compared and analyzed in terms of rendering image quality, rendering time, required memory, and hierarchical information.</p></div><div><h3>Results</h3><p>Using the weighted blended order-independent transparency with enhanced alpha weight technique, the rendering time is shortened by 53.2% compared with the depth peeling algorithm, and the texture memory required is much smaller than that of the depth peeling algorithm. The rendering results of weighted blended order-independent transparency with enhanced alpha weight are approximately accurate compared with the depth peeling algorithm as ground truth, and there is no popping when surfaces pass through one another. At the same time, compared with weighted blended order-independent transparency, weighted blended OIT with enhanced alpha weight achieves an intuitive perception of the hierarchical information of concentration data.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 266-278"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
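The trilinear interpolation step mentioned above can be sketched independently of the rendering pipeline: it blends the eight grid samples surrounding a query point with weights given by the point's fractional position. The toy concentration field below is made up; the weighting is the standard trilinear formula.

```python
def trilinear(field, x, y, z):
    """Trilinearly interpolate a scalar field sampled on a unit-spaced grid.

    `field[i][j][k]` holds the concentration at integer grid point
    (i, j, k); (x, y, z) is a continuous query position inside the grid.
    """
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    c = 0.0
    # Blend the eight surrounding corner samples.
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                wx = fx if di else 1 - fx
                wy = fy if dj else 1 - fy
                wz = fz if dk else 1 - fz
                c += wx * wy * wz * field[i + di][j + dj][k + dk]
    return c

# A toy 2x2x2 block of concentration samples; the value at the cell
# center is the average of the eight corners.
field = [[[0.0, 1.0], [2.0, 3.0]], [[4.0, 5.0], [6.0, 7.0]]]
print(trilinear(field, 0.5, 0.5, 0.5))  # → 3.5
```

Grid homogenization would resample the raw, possibly irregular concentration data onto such a regular grid before interpolation.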
{"title":"The Improvement of Iterative Closest Point with Edges of Projected Image","authors":"Chen Wang","doi":"10.1016/j.vrih.2022.09.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.09.001","url":null,"abstract":"<div><h3>Background</h3><p>There are many regular-shape objects in the artificial environment. It is difficult to distinguish the poses of these objects, when only geometric information is utilized. With the development of sensor technologies, we can utilize other information to solve this problem.</p></div><div><h3>Methods</h3><p>We propose an algorithm to register point clouds by integrating color information. The key idea of the algorithm is that we jointly optimize dense term and edge term. The dense term is built similarly to iterative closest point algorithm. In order to build the edge term, we extract the edges of the images obtained by projecting the point clouds. The edge term prevents the point clouds from sliding in registration. We utilize this loosely coupled method to fuse geometric and color information.</p></div><div><h3>Results</h3><p>The experiments demonstrate that edge image approach improves the precision and the algorithm is robust.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 279-291"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing interactive glazing through an engineering psychology approach: Six augmented reality scenarios that envision future car human-machine interface","authors":"Wei Liu , Yancong Zhu , Ruonan Huang , Takumi Ohashi , Jan Auernhammer , Xiaonan Zhang , Ce Shi , Lu Wang","doi":"10.1016/j.vrih.2022.07.004","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.004","url":null,"abstract":"<div><p>With more and more vehicles becoming autonomous, intelligent, and connected, paying attention to the future usage of car human-machine interface (HMI) with these vehicles should also get more relevant. While car HMI has been addressed in several scientific studies, little attention is being paid to designing and implementing interactive glazing into everyday (autonomous) driving contexts. Through reflecting on what was found before in theory and practice, we describe an engineering psychology practice and the design of six novel future user scenarios, which envision the application of a specific set of augmented reality (AR) support user interactions. We also present evaluations conducted with the scenarios and experiential prototypes and found that these AR scenarios support our target user groups in experiencing a new type of interactions. The overall evaluation was positive, with some valuable assessment results and suggestions. 
We envision that this paper will interest applied psychology educators who aspire to teach how to operationalize AR in a human-centered design (HCD) process to students with little preexisting expertise or little scientific knowledge about engineering psychology.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 2","pages":"Pages 157-170"},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49891616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}