{"title":"A survey of real-time rendering on Web3D application","authors":"Geng Yu , Chang Liu , Ting Fang , Jinyuan Jia , Enming Lin , Yiqiang He , Siyuan Fu , Long Wang , Lei Wei , Qingyu Huang","doi":"10.1016/j.vrih.2022.04.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.04.002","url":null,"abstract":"<div><h3>Background</h3><p>In recent years, with the rapid development of the mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D applications, including Web3D online tourism, Web3D online architecture, Web3D online education, Web3D online medical care, and Web3D online shopping, are examples of applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications, which use text, sound, image, video, and 2D animation as their main communication media, by adopting 3D virtual scenes as the main interaction object, enabling a user experience that delivers a strong sense of immersion. This paper examines the emerging Web3D applications that strongly impact people's lives through “real-time rendering technology”, the core technology of Web3D. 
This paper discusses the major 3D graphics APIs of Web3D and well-known Web3D engines in China and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories.</p></div><div><h3>Results</h3><p>Finally, this study analyzed the specific demands posed by different fields on Web3D applications by referring to representative Web3D applications in each particular field.</p></div><div><h3>Conclusions</h3><p>Our survey results show that Web3D applications based on real-time rendering have penetrated deeply into many sectors of society and even the family, a trend that influences every industry.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 5","pages":"Pages 379-394"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71728991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-pose estimation based on weak supervision","authors":"Xiaoyan Hu, Xizhao Bao, Guoli Wei, Zhaoyu Li","doi":"10.1016/j.vrih.2022.08.010","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.010","url":null,"abstract":"<div><h3>Background</h3><p>In computer vision, simultaneously estimating human pose, shape, and clothing is a practical issue in real life, but remains a challenging task owing to the variety of clothing, the complexity of deformation, the shortage of large-scale datasets, and the difficulty of estimating clothing style.</p></div><div><h3>Methods</h3><p>We propose a multistage weakly supervised method that makes full use of data with less labeled information for learning to estimate human body shape, pose, and clothing deformation. In the first stage, the SMPL human-body model parameters are regressed using the multi-view 2D key points of the human body. Using multi-view information as weak supervision avoids the depth-ambiguity problem of a single view, yields a more accurate human posture, and makes supervisory information easy to access. In the second stage, clothing is represented by a PCA-based model that uses two-dimensional key points of clothing as supervision to regress the parameters. In the third stage, we predefine an embedding graph for each type of clothing to describe the deformation. Then, the mask information of the clothing is used to further adjust the deformation of the clothing. To facilitate training, we constructed a multi-view synthetic dataset built from BCNet and SURREAL.</p></div><div><h3>Results</h3><p>Experiments show that the accuracy of our method reaches the same level as that of SOTA methods trained with strong supervision, while using only weakly supervised information. Because this study uses only weakly supervised information, which is much easier to obtain, it has the advantage of utilizing existing data as training data. 
Experiments on the DeepFashion2 dataset show that our method can make full use of existing weak supervision information for fine-tuning on a dataset with little supervision information, whereas strongly supervised methods cannot be trained or fine-tuned owing to the lack of exact annotations.</p></div><div><h3>Conclusions</h3><p>Our weak-supervision method can accurately estimate human body size, pose, and several common types of clothing, and overcomes the current shortage of clothing data.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 366-377"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The validity analysis of the non-local mean filter and a derived novel denoising method","authors":"Xiangyuan Liu, Zhongke Wu, Xingce Wang","doi":"10.1016/j.vrih.2022.08.017","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.017","url":null,"abstract":"<div><p>Image denoising is an important topic in the digital image processing field. This paper theoretically studies the validity of the classical non-local mean filter (NLM) for removing Gaussian noise from a novel statistical perspective. By regarding the restored image as an estimator of the clean image from the statistical view, we progressively analyse the unbiasedness and effectiveness of the restored value obtained by the NLM filter. We then propose an improved NLM algorithm, the clustering-based NLM filter (CNLM), derived from the conditions obtained through the theoretical analysis. The proposed filter attempts to restore an ideal value using the approximately constant intensities obtained by the image clustering process. Here, we adopt a mixture probability model on a prefiltered image to generate an estimator of the ideal clustered components. The experimental results show that our algorithm obtains considerable improvement in peak signal-to-noise ratio (PSNR) values and visual results when removing Gaussian noise. 
Moreover, the considerable practical performance of our filter shows that the method is theoretically sound, as it effectively estimates ideal images.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 338-350"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49897113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
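For readers unfamiliar with the classical NLM filter analysed in the record above, a minimal sketch in Python/NumPy follows. This illustrates the baseline filter only, not the authors' CNLM variant; the patch size, search window, and filter strength `h` are arbitrary illustrative choices, and `psnr` is a hypothetical helper for the quality indicator the abstract cites.

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio (dB), the indicator used in the paper."""
    return 10 * np.log10(peak ** 2 / np.mean((ref - img) ** 2))

def nlm_denoise(img, patch=3, window=7, h=0.1):
    """Minimal non-local means: each pixel becomes a weighted average of
    pixels in a search window, weighted by how similar their surrounding
    patches are to the patch around the pixel being restored."""
    pad, half = patch // 2, window // 2
    padded = np.pad(img, pad + half, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad + half, j + pad + half
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            weights, values = [], []
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            weights = np.asarray(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out
```

The CNLM filter described in the abstract goes further by clustering the prefiltered image and averaging within approximately constant-intensity components, which this baseline sketch does not attempt.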
{"title":"An intelligent experimental container suite: using a chemical experiment with virtual-real fusion as an example","authors":"Lurong Yang , Zhiquan Feng , Junhong Meng","doi":"10.1016/j.vrih.2022.07.008","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.008","url":null,"abstract":"<div><h3>Background</h3><p>At present, the teaching of experiments in primary and secondary schools is constrained by cost and safety factors. Existing research on virtual-experiment platforms alleviates this problem. However, the lack of real experimental equipment and the use of a single channel to understand users’ intentions weaken these platforms operationally and degrade the naturalness of interactions. To solve the above problems, we propose an intelligent experimental container structure and a situational awareness algorithm, both of which are verified and then applied to a chemical experiment involving virtual-real fusion. First, acquired images are denoised in the visual channel, using maximum diffuse reflection chroma to remove overexposure. Second, container situational awareness is realized by segmenting the image liquid level and establishing a relation-fitting model. Then, strategies for constructing complete behaviors and making priority comparisons among behaviors are adopted for information complementarity and information independence, respectively. A multichannel intention-understanding model and an interactive paradigm fusing vision, hearing, and touch are proposed. 
The results show that the designed experimental container and algorithm in a virtual chemical experiment platform can achieve a natural level of human-computer interaction, enhance the user's sense of operation, and achieve high user satisfaction.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 317-337"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling heterogeneous behaviors with different strategies in a terrorist attack","authors":"Le Bi , Tingting Liu , Zhen Liu , Jason Teo , Yumeng Zhao , Yanjie Chai","doi":"10.1016/j.vrih.2022.08.015","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.015","url":null,"abstract":"<div><p>In terrorist-attack simulations, existing methods do not describe individual differences, which means that different individuals will not exhibit different behaviors. To address this problem, we propose a framework to model people’s heterogeneous behaviors in a terrorist attack. For pedestrians, we construct an emotional model that takes personality and visual perception into account. The emotional model is then combined with the pedestrians' relationship networks to build the decision-making model. With the proposed decision-making model, pedestrians may exhibit altruistic behaviors. For terrorists, a mapping model is developed to map antisocial personality to attacking strategy. The experiments show that the proposed algorithm can generate realistic heterogeneous behaviors that are consistent with existing psychological research findings.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 351-365"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of intelligent diagnosis methods of imaging gland cancer based on machine learning","authors":"Han Jiang, Wen-Jia Sun, Han-Fei Guo, Jia-Yuan Zeng, Xin Xue, Shuai Li","doi":"10.1016/j.vrih.2022.09.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.09.002","url":null,"abstract":"<div><h3>Background</h3><p>Gland cancer is a high-incidence disease endangering human health, and its early detection and treatment require efficient, accurate, and objective intelligent diagnosis methods. In recent years, the advent of machine learning techniques has yielded satisfactory results in intelligent gland cancer diagnosis based on clinical images, greatly improving the accuracy and efficiency of medical image interpretation while reducing the workload of doctors. The focus of this paper is to review, classify, and analyze the intelligent diagnosis methods of imaging gland cancer based on machine learning and deep learning. To start with, the paper presents a brief introduction to the basic imaging principles of multi-modal medical images, such as the commonly used CT, MRI, US, PET, and pathology images. In addition, the intelligent diagnosis methods of imaging gland cancer are classified into supervised learning and weakly supervised learning. Supervised learning comprises traditional machine learning methods, such as KNN, SVM, and the multilayer perceptron, and deep learning methods evolving from CNNs; meanwhile, weakly supervised learning can be further categorized into active learning, semi-supervised learning, and transfer learning. State-of-the-art methods are illustrated with implementation details, including image segmentation, feature extraction, and the optimization of classifiers, and their performance is evaluated through indicators such as accuracy, precision, and sensitivity. 
To conclude, the challenges and development trends of intelligent diagnosis methods for imaging gland cancer are addressed and discussed.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 293-316"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Training on LSA lifeboat operation using Mixed Reality","authors":"Spyridon Nektarios Bolierakis, Margarita Kostovasili, Lazaros Karagiannidis, Dr. Angelos Amditis","doi":"10.1016/j.vrih.2023.02.005","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.02.005","url":null,"abstract":"<div><h3>Background</h3><p>This work aims to provide an overview of the use of Mixed Reality (MR) technology in the maritime industry for training purposes. Current training procedures cover a broad range of procedural operations for Life-Saving Appliance (LSA) lifeboats; however, several gaps and limitations have been identified in practical training that can be addressed through the use of MR. Augmented, Virtual, and Mixed Reality applications are already used in various fields of the maritime industry, but their full potential has not yet been exploited. The SafePASS project aims to exploit the advantages of MR in maritime training by introducing a relevant application focusing on the use and maintenance of LSA lifeboats.</p></div><div><h3>Methods</h3><p>An MR Training application is proposed to support the training of crew members in equipment usage and operation, as well as in maintenance activities and procedures. The application consists of the training tool that trains crew members on handling lifeboats, the training evaluation tool that allows trainers to assess the performance of trainees, and the maintenance tool that supports crew members in performing maintenance activities and procedures on lifeboats. For each tool, an indicative session and scenario workflow are implemented, along with the main supported interactions of the trainee with the equipment.</p></div><div><h3>Results</h3><p>The application has been tested and validated both in a lab environment and using a real LSA lifeboat, resulting in an improved experience for the users, who provided feedback and recommendations for further development. 
The application has also been demonstrated onboard a cruise ship, showcasing the supported functionalities to relevant stakeholders, who recognized the added value of the application and suggested potential future exploitation areas.</p></div><div><h3>Conclusions</h3><p>The MR Training application has been evaluated as very promising in providing a user-friendly training environment that can support crew members in LSA lifeboat operation and maintenance, while it remains subject to improvement and further expansion.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 201-212"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Spatiotemporal Intelligent Framework and Experimental Platform for Urban Digital Twins","authors":"Jinxing Hu , Zhihan Lv , Diping Yuan , Bing He , Wenjiang Chen , Xiongfei Ye , Donghao Li , Ge Yang","doi":"10.1016/j.vrih.2022.08.018","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.018","url":null,"abstract":"<div><p>This work reviews the current research status of urban Digital Twins to establish an intelligent spatiotemporal framework. A Geospatial Artificial Intelligence (GeoAI) system is developed based on a Geographic Information System and Artificial Intelligence. It integrates multi-video technology and a Virtual City in urban Digital Twins. In addition, an improved small-object detection model, YOLOv5-Pyramid, is proposed, and Siamese network video tracking models, namely MPSiam and FSSiamese, are established. Finally, an experimental platform is built to verify the georeferencing correction scheme for video images. The experimental results show that the Multiply-Accumulate count of MPSiam is 0.5B, whereas that of ResNet50-Siam is 4.5B; the model is thus compressed by a factor of 4.8. The inference speed increases by a factor of 3.3, reaching 83 frames per second, with only a 3% loss in Expected Average Overlap. 
Therefore, the urban Digital Twins-oriented GeoAI framework established here has excellent performance for video georeferencing and target detection problems.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 213-231"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive navigation assistance based on eye movement features in virtual reality","authors":"Song Zhao, Shiwei Cheng","doi":"10.1016/j.vrih.2022.07.003","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.07.003","url":null,"abstract":"<div><h3>Background</h3><p>Navigation assistance is very important for users roaming in virtual reality scenes; however, traditional navigation methods require users to manually request a map for viewing, which leads to low immersion and a poor user experience.</p></div><div><h3>Methods</h3><p>To address this issue, we first collected data on when users need navigation assistance in a virtual reality environment, including various eye movement features such as gaze fixation, pupil size, and gaze angle. We then used the Boosting-based XGBoost algorithm to train a prediction model, and finally used it to predict whether users need navigation assistance in a roaming task.</p></div><div><h3>Results</h3><p>In evaluating the performance of the model, the accuracy, precision, recall, and F1-score all reached about 95%. 
In addition, by applying the model to a virtual reality scene, an adaptive navigation assistance system based on the user’s real-time eye movement data was implemented.</p></div><div><h3>Conclusions</h3><p>Compared with traditional navigation assistance methods, our adaptive navigation assistance enables users to be more immersed and effective while roaming in a VR environment.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 232-248"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
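The prediction stage described in the record above (a boosted-tree classifier over eye-movement features) can be sketched as follows. Everything here is illustrative: the feature names and their synthetic distributions are invented, and scikit-learn's GradientBoostingClassifier stands in for XGBoost so the sketch carries no dependency on the xgboost package.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400

# Hypothetical labels and eye-movement features: users who need help are
# assumed to fixate longer, show larger pupils, and sweep wider gaze angles.
needs_help = rng.integers(0, 2, n)
fixation_s = rng.normal(0.3 + 0.4 * needs_help, 0.1)   # fixation duration (s)
pupil_mm = rng.normal(3.0 + 0.8 * needs_help, 0.3)     # pupil size (mm)
gaze_deg = rng.normal(10.0 + 15.0 * needs_help, 5.0)   # gaze angle (deg)
X = np.column_stack([fixation_s, pupil_mm, gaze_deg])

# Train a boosted-tree classifier and evaluate on a held-out split.
Xtr, Xte, ytr, yte = train_test_split(X, needs_help, test_size=0.25,
                                      random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
acc = accuracy_score(yte, clf.predict(Xte))
print(f"held-out accuracy: {acc:.2f}")
```

In the adaptive system, `clf.predict` would run on a sliding window of live eye-tracking features and trigger the navigation aid only when "needs help" is predicted.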
{"title":"Research on AGV task path planning based on improved A* algorithm","authors":"Wang Xianwei , Ke Fuyang , Lu Jiajia","doi":"10.1016/j.vrih.2022.11.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.11.002","url":null,"abstract":"<div><h3>Background</h3><p>In recent years, automatic guided vehicles (AGVs) have developed rapidly and been widely applied in intelligent transportation, cargo assembly, military testing, and other fields. One of the key issues in these applications is path planning. Global path planning results based on known environmental information are used as the ideal path for AGVs, combined with local path planning, to achieve safe and fast arrival at the destination. The ideal path produced by the global planning method should meet the requirements of as few turns as possible, a short planning time, and continuous path curvature.</p></div><div><h3>Methods</h3><p>We propose a global path-planning method based on an improved A* algorithm, and the robustness of the algorithm is verified by simulation experiments in typical multi-obstacle and indoor scenarios. To reduce pathfinding time, we increase the heuristic information weight of the target location and avoid invalid cost calculations for obstacle areas in the dynamic programming process. Then, the optimality of the number of turns in the path is ensured by a turning-node backtracking optimization method. Since the final global path needs to satisfy the AGV kinematic constraints and the curvature-continuity condition, we adopt a curve-smoothing scheme and select the optimal result that meets the constraints.</p></div><div><h3>Conclusions</h3><p>Simulation results show that the improved algorithm proposed in this paper outperforms the traditional method and can help AGVs improve task-execution efficiency by efficiently planning a smooth path with low complexity. 
Additionally, this scheme provides a new solution for global path planning of unmanned vehicles.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 3","pages":"Pages 249-265"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49833144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
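The heuristic-weighting idea from the record above can be sketched as weighted A* on a 4-connected grid: a weight w > 1 inflates the Manhattan heuristic toward the goal, cutting node expansions at the cost of guaranteed optimality. This is a minimal sketch under assumptions: the grid, the weight value, and the function name are illustrative, and the paper's turning-node backtracking and curve-smoothing steps are not reproduced.

```python
import heapq

def a_star(grid, start, goal, w=1.0):
    """Weighted A* on a 4-connected grid of 0 (free) / 1 (obstacle) cells.
    Returns the path as a list of (row, col) tuples, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance to the goal, inflated by the weight w
        return w * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))

    g = {start: 0}          # best known cost from start to each node
    parent = {start: None}  # back-pointers for path reconstruction
    open_heap = [(h(start), start)]
    closed = set()
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur in closed:   # stale heap entry; a cheaper route was found
            continue
        closed.add(cur)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in closed):
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None
```

With w = 1 the heuristic is admissible and the returned path is cost-optimal; with w > 1 (as the paper's speed-up suggests) the search expands fewer nodes and the returned path cost is bounded by w times the optimum.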