Virtual Reality Intelligent Hardware: Latest Articles

Opening Design using Bayesian Optimization
Virtual Reality Intelligent Hardware Pub Date : 2023-12-01 DOI: 10.1016/j.vrih.2023.06.001
Nick Vitsas, Iordanis Evangelou, Georgios Papaioannou, Anastasios Gkaravelis
{"title":"Opening Design using Bayesian Optimization","authors":"Nick Vitsas,&nbsp;Iordanis Evangelou,&nbsp;Georgios Papaioannou,&nbsp;Anastasios Gkaravelis","doi":"10.1016/j.vrih.2023.06.001","DOIUrl":"10.1016/j.vrih.2023.06.001","url":null,"abstract":"<div><h3>Background</h3><p>Opening design is a major consideration in architectural buildings during early structural layout specification. Decisions regarding the geometric characteristics of windows, skylights, hatches, etc., greatly impact the overall energy efficiency, airflow and appearance of a building, both internally and externally.</p></div><div><h3>Methods</h3><p>In this work, we employ a goal-based, illumination-driven approach to opening design using a Bayesian Optimization approach, based on Gaussian Processes. A method is proposed that allows a designer to easily set lighting intentions along with qualitative and quantitative characteristics of desired openings.</p></div><div><h3>Results</h3><p>All parameters are optimized within a cost minimization framework to calculate geometrically feasible, architecturally admissible and aesthetically pleasing openings of any desired shape, while respecting the designer's lighting constraints.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 6","pages":"Pages 550-564"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579623000256/pdf?md5=ed06d6b285e4130cc14d1417bdef7cd4&pid=1-s2.0-S2096579623000256-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139024240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PartLabeling: A Label Management Framework in 3D Space
Virtual Reality Intelligent Hardware Pub Date : 2023-12-01 DOI: 10.1016/j.vrih.2023.06.004
Semir Elezovikj, Jianqing Jia, Chiu C. Tan, Haibin Ling
{"title":"PartLabeling: A Label Management Framework in 3D Space","authors":"Semir Elezovikj ,&nbsp;Jianqing Jia ,&nbsp;Chiu C. Tan ,&nbsp;Haibin Ling","doi":"10.1016/j.vrih.2023.06.004","DOIUrl":"10.1016/j.vrih.2023.06.004","url":null,"abstract":"<div><p>In this work, we focus on the label layout problem: specifying the positions of overlaid virtual annotations in Virtual/Augmented Reality scenarios. Designing a layout of labels that does not violate domain-specific design requirements, while at the same time satisfying aesthetic and functional principles of good design, can be a daunting task even for skilled visual designers. Presenting the annotations in 3D object space instead of projection space, allows for the preservation of spatial and depth cues. This results in stable layouts in dynamic environments, since the annotations are anchored in 3D space. In this paper we make two major contributions. First, we propose a technique for managing the layout and rendering of annotations in Virtual/Augmented Reality scenarios by manipulating the annotations directly in 3D space. For this, we make use of Artificial Potential Fields and use 3D geometric constraints to adapt them in 3D space. Second, we introduce PartLabeling: an open source platform in the form of a web application that acts as a much-needed generic framework allowing to easily add labeling algorithms and 3D models. This serves as a catalyst for researchers in this field to make their algorithms and implementations publicly available, as well as ensure research reproducibility. The PartLabeling framework relies on a dataset that we generate as a subset of the original PartNet dataset [17] consisting of models suitable for the label management task. The dataset consists of 1,000 3D models with part annotations.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 6","pages":"Pages 490-508"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2096579623000347/pdf?md5=057ef6e2f709bc02a8c7c5a29fd317da&pid=1-s2.0-S2096579623000347-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139025326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Implementation of natural hand gestures in holograms for 3D object manipulation
Virtual Reality Intelligent Hardware Pub Date : 2023-10-01 DOI: 10.1016/j.vrih.2023.02.001
Ajune Wanis Ismail, Muhammad Akma Iman
{"title":"Implementation of natural hand gestures in holograms for 3D object manipulation","authors":"Ajune Wanis Ismail ,&nbsp;Muhammad Akma Iman","doi":"10.1016/j.vrih.2023.02.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.02.001","url":null,"abstract":"<div><p>Holograms provide a characteristic manner to display and convey information, and have been improved to provide better user interactions Holographic interactions are important as they improve user interactions with virtual objects. Gesture interaction is a recent research topic, as it allows users to use their bare hands to directly interact with the hologram. However, it remains unclear whether real hand gestures are well suited for hologram applications. Therefore, we discuss the development process and implementation of three-dimensional object manipulation using natural hand gestures in a hologram. We describe the design and development process for hologram applications and its integration with real hand gesture interactions as initial findings. Experimental results from Nasa TLX form are discussed. Based on the findings, we actualize the user interactions in the hologram.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 5","pages":"Pages 439-450"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71728995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Survey of lightweighting methods of huge 3D models for online Web3D visualization
Virtual Reality Intelligent Hardware Pub Date : 2023-10-01 DOI: 10.1016/j.vrih.2020.02.002
Xiaojun Liu, Jinyuan Jia, Chang Liu
{"title":"Survey of lightweighting methods of huge 3D models for online Web3D visualization","authors":"Xiaojun Liu ,&nbsp;Jinyuan Jia ,&nbsp;Chang Liu","doi":"10.1016/j.vrih.2020.02.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2020.02.002","url":null,"abstract":"<div><h3>Background</h3><p>With the rapid development of Web3D technologies, the online Web3D visualization, particularly for complex models or scenes, has been in a great demand. Owing to the major conflict between the Web3D system load and resource consumption in the processing of these huge models, the huge 3D model lightweighting methods for online Web3D visualization are reviewed in this paper.</p></div><div><h3>Methods</h3><p>By observing the geometry redundancy introduced by man-made operations in the modeling procedure, several categories of lightweighting related work that aim at reducing the amount of data and resource consumption are elaborated for Web3D visualization.</p></div><div><h3>Results</h3><p>By comparing perspectives, the characteristics of each method are summarized, and among the reviewed methods, the geometric redundancy removal that achieves the lightweight goal by detecting and removing the repeated components is an appropriate method for current online Web3D visualization. Meanwhile, the learning algorithm, still in improvement period at present, is our expected future research topic.</p></div><div><h3>Conclusions</h3><p>Various aspects should be considered in an efficient lightweight method for online Web3D visualization, such as characteristics of original data, combination or extension of existing methods, scheduling strategy, cache management, and rendering mechanism. Meanwhile, innovation methods, particularly the learning algorithm, are worth exploring.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 5","pages":"Pages 395-406"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71728992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep-reinforcement-learning-based robot motion strategies for grabbing objects from human hands
Virtual Reality Intelligent Hardware Pub Date : 2023-10-01 DOI: 10.1016/j.vrih.2022.12.001
Zeyuan Cai, Zhiquan Feng, Liran Zhou, Xiaohui Yang, Tao Xu
{"title":"Deep-reinforcement-learning-based robot motion strategies for grabbing objects from human hands","authors":"Zeyuan Cai ,&nbsp;Zhiquan Feng ,&nbsp;Liran Zhou ,&nbsp;Xiaohui Yang ,&nbsp;Tao Xu","doi":"10.1016/j.vrih.2022.12.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.12.001","url":null,"abstract":"<div><h3>Background</h3><p>Robot grasping encompasses a wide range of research areas; however, most studies have been focused on the grasping of only stationary objects in a scene; only a few studies on how to grasp objects from a user's hand have been conducted. In this paper, a robot grasping algorithm based on deep reinforcement learning (RGRL) is proposed.</p></div><div><h3>Methods</h3><p>The RGRL takes the relative positions of the robot and the object in a user's hand as input and outputs the best action of the robot in the current state. Thus, the proposed algorithm realizes the functions of autonomous path planning and grasping objects safely from the hands of users. A new method for improving the safety of human–robot cooperation is explored. To solve the problems of a low utilization rate and slow convergence of reinforcement learning algorithms, the RGRL is first trained in a simulation scene, and then, the model parameters are applied to a real scene. To reduce the difference between the simulated and real scenes, domain randomization is applied to randomly change the positions and angles of objects in the simulated scenes at regular intervals, thereby improving the diversity of the training samples and robustness of the algorithm.</p></div><div><h3>Results</h3><p>The RGRL's effectiveness and accuracy are verified by evaluating it on both simulated and real scenes, and the results show that the RGRL can achieve an accuracy of more than 80% in both cases.</p></div><div><h3>Conclusions</h3><p>RGRL is a robot grasping algorithm that employs domain randomization and deep reinforcement learning for effective grasping in simulated and real scenes. However, it lacks flexibility in adapting to different grasping poses, prompting future research in achieving safe grasping for diverse user postures.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 5","pages":"Pages 407-421"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71728993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Eye-shaped keyboard for dual-hand text entry in virtual reality
Virtual Reality Intelligent Hardware Pub Date : 2023-10-01 DOI: 10.1016/j.vrih.2023.07.001
Kangyu Wang, Yangqiu Yan, Hao Zhang, Xiaolong Liu, Lili Wang
{"title":"Eye-shaped keyboard for dual-hand text entry in virtual reality","authors":"Kangyu Wang ,&nbsp;Yangqiu Yan ,&nbsp;Hao Zhang ,&nbsp;Xiaolong Liu ,&nbsp;Lili Wang","doi":"10.1016/j.vrih.2023.07.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.07.001","url":null,"abstract":"<div><p>We propose an eye-shaped keyboard for high-speed text entry in virtual reality (VR), having the shape of dual eyes with characters arranged along the curved eyelids, which ensures low density and short spacing of the keys. The eye-shaped keyboard references the QWERTY key sequence, allowing the users to benefit from their experience using the QWERTY keyboard. The user interacts with an eye-shaped keyboard using rays controlled with both the hands. A character can be entered in one step by moving the rays from the inner eye regions to regions of the characters. A high-speed auto-complete system was designed for the eye-shaped keyboard. We conducted a pilot study to determine the optimal parameters, and a user study to compare our eye-shaped keyboard with the QWERTY and circular keyboards. For beginners, the eye-shaped keyboard performed significantly more efficiently and accurately with less task load and hand movement than the circular keyboard. Compared with the QWERTY keyboard, the eye-shaped keyboard is more accurate and significantly reduces hand translation while maintaining similar efficiency. Finally, to evaluate the potential of eye-shaped keyboards, we conducted another user study. In this study, the participants were asked to type continuously for three days using the proposed eye-shaped keyboard, with two sessions per day. In each session, participants were asked to type for 20min, and then their typing performance was tested. The eye-shaped keyboard was proven to be efficient and promising, with an average speed of 19.89 words per minute (WPM) and mean uncorrected error rate of 1.939%. The maximum speed reached 24.97 WPM after six sessions and continued to increase.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 5","pages":"Pages 451-469"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71729298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Novel learning framework for optimal multi-object video trajectory tracking
Virtual Reality Intelligent Hardware Pub Date : 2023-10-01 DOI: 10.1016/j.vrih.2023.04.001
Siyuan Chen, Xiaowu Hu, Wenying Jiang, Wen Zhou, Xintao Ding
{"title":"Novel learning framework for optimal multi-object video trajectory tracking","authors":"Siyuan Chen,&nbsp;Xiaowu Hu,&nbsp;Wenying Jiang,&nbsp;Wen Zhou,&nbsp;Xintao Ding","doi":"10.1016/j.vrih.2023.04.001","DOIUrl":"https://doi.org/10.1016/j.vrih.2023.04.001","url":null,"abstract":"<div><h3>Background</h3><p>With the rapid development of Web3D, virtual reality, and digital twins, virtual trajectories and decision data considerably rely on the analysis and understanding of real video data, particularly in emergency evacuation scenarios. Correctly and effectively evacuating crowds in virtual emergency scenarios are becoming increasingly urgent. One good solution is to extract pedestrian trajectories from videos of emergency situations using a multi-target tracking algorithm and use them to define evacuation procedures.</p></div><div><h3>Methods</h3><p>To implement this solution, a trajectory extraction and optimization framework based on multi-target tracking is developed in this study. First, a multi-target tracking algorithm is used to extract and preprocess the trajectory data of the crowd in a video. Then, the trajectory is optimized by combining the trajectory point extraction algorithm and Savitzky–Golay smoothing filtering method. Finally, related experiments are conducted, and the results show that the proposed approach can effectively and accurately extract the trajectories of multiple target objects in real time.</p></div><div><h3>Results</h3><p>In addition, the proposed approach retains the real characteristics of the trajectories as much as possible while improving the trajectory smoothing index, which can provide data support for the analysis of pedestrian trajectory data and formulation of personnel evacuation schemes in emergency scenarios.</p></div><div><h3>Conclusions</h3><p>Further comparisons with methods used in related studies confirm the feasibility and superiority of the proposed framework.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 5","pages":"Pages 422-438"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71728994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A survey of real-time rendering on Web3D application
Virtual Reality Intelligent Hardware Pub Date : 2023-10-01 DOI: 10.1016/j.vrih.2022.04.002
Geng Yu, Chang Liu, Ting Fang, Jinyuan Jia, Enming Lin, Yiqiang He, Siyuan Fu, Long Wang, Lei Wei, Qingyu Huang
{"title":"A survey of real-time rendering on Web3D application","authors":"Geng Yu ,&nbsp;Chang Liu ,&nbsp;Ting Fang ,&nbsp;Jinyuan Jia ,&nbsp;Enming Lin ,&nbsp;Yiqiang He ,&nbsp;Siyuan Fu ,&nbsp;Long Wang ,&nbsp;Lei Wei ,&nbsp;Qingyu Huang","doi":"10.1016/j.vrih.2022.04.002","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.04.002","url":null,"abstract":"<div><h3>Background</h3><p>In recent years, with the rapid development of mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D applications, including Web3D online tourism, Web3D online architecture, Web3D online education environment, Web3D online medical care, and Web3D online shopping are examples of these applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications that use text, sound, image, video, and 2D animation as their main communication media, and resorted to 3D virtual scenes as the main interaction object, enabling a user experience that delivers a strong sense of immersion. This paper approached the emerging Web3D applications that generate stronger impacts on people's lives through “real-time rendering technology”, which is the core technology of Web3D. This paper discusses all the major 3D graphics APIs of Web3D and the well-known Web3D engines at home and abroad and classify the real-time rendering frameworks of Web3D applications into different categories.</p></div><div><h3>Results</h3><p>Finally, this study analyzed the specific demand posed by different fields to Web3D applications by referring to the representative Web3D applications in each particular field.</p></div><div><h3>Conclusions</h3><p>Our survey results show that Web3D applications based on real-time rendering have in-depth sectors of society and even family, which is a trend that has influence on every line of industry.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 5","pages":"Pages 379-394"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71728991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human-pose estimation based on weak supervision
Virtual Reality Intelligent Hardware Pub Date : 2023-08-01 DOI: 10.1016/j.vrih.2022.08.010
Xiaoyan Hu, Xizhao Bao, Guoli Wei, Zhaoyu Li
{"title":"Human-pose estimation based on weak supervision","authors":"Xiaoyan Hu,&nbsp;Xizhao Bao,&nbsp;Guoli Wei,&nbsp;Zhaoyu Li","doi":"10.1016/j.vrih.2022.08.010","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.010","url":null,"abstract":"<div><h3>Background</h3><p>In computer vision, simultaneously estimating human pose, shape, and clothing is a practical issue in real life, but remains a challenging task owing to the variety of clothing, complexity of deformation, shortage of large-scale datasets, and difficulty in estimating clothing style.</p></div><div><h3>Methods</h3><p>We propose a multistage weakly supervised method that makes full use of data with less labeled information for learning to estimate human body shape, pose, and clothing deformation. In the first stage, the SMPL human-body model parameters were regressed using the multi-view 2D key points of the human body. Using multi-view information as weakly supervised information can avoid the deep ambiguity problem of a single view, obtain a more accurate human posture, and access supervisory information easily. In the second stage, clothing is represented by a PCAbased model that uses two-dimensional key points of clothing as supervised information to regress the parameters. In the third stage, we predefine an embedding graph for each type of clothing to describe the deformation. Then, the mask information of the clothing is used to further adjust the deformation of the clothing. To facilitate training, we constructed a multi-view synthetic dataset that included BCNet and SURREAL.</p></div><div><h3>Results</h3><p>The Experiments show that the accuracy of our method reaches the same level as that of SOTA methods using strong supervision information while only using weakly supervised information. Because this study uses only weakly supervised information, which is much easier to obtain, it has the advantage of utilizing existing data as training data. Experiments on the DeepFashion2 dataset show that our method can make full use of the existing weak supervision information for fine-tuning on a dataset with little supervision information, compared with the strong supervision information that cannot be trained or adjusted owing to the lack of exact annotation information.</p></div><div><h3>Conclusions</h3><p>Our weak supervision method can accurately estimate human body size, pose, and several common types of clothing and overcome the issues of the current shortage of clothing data.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 366-377"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49848597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The validity analysis of the non-local mean filter and a derived novel denoising method
Virtual Reality Intelligent Hardware Pub Date : 2023-08-01 DOI: 10.1016/j.vrih.2022.08.017
Xiangyuan Liu, Zhongke Wu, Xingce Wang
{"title":"The validity analysis of the non-local mean filter and a derived novel denoising method","authors":"Xiangyuan Liu,&nbsp;Zhongke Wu,&nbsp;Xingce Wang","doi":"10.1016/j.vrih.2022.08.017","DOIUrl":"https://doi.org/10.1016/j.vrih.2022.08.017","url":null,"abstract":"<div><p>Image denoising is an important topic in the digital image processing field. This paper theoretically studies the validity of the classical non-local mean filter (NLM) for removing Gaussian noise from a novel statistic perspective. By regarding the restored image as an estimator of the clear image from the statistical view, we gradually analyse the unbiasedness and effectiveness of the restored value obtained by the NLM filter. Then, we propose an improved NLM algorithm called the clustering-based NLM filter (CNLM) that derived from the conditions obtained through the theoretical analysis. The proposed filter attempts to restore an ideal value using the approximately constant intensities obtained by the image clustering process. Here, we adopt a mixed probability model on a prefiltered image to generate an estimator of the ideal clustered components. The experimental results show that our algorithm obtains considerable improvement in peak signal-to-noise ratio (PSNR) values and visual results when removing Gaussian noise. On the other hand, the considerable practical performance of our filter shows that our method is theoretically acceptable as it can effectively estimates ideal images.</p></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"5 4","pages":"Pages 338-350"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49897113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0