Virtual Reality Intelligent Hardware: Latest Articles

Survey of neurocognitive disorder detection methods based on speech, visual, and virtual reality technologies
Virtual Reality Intelligent Hardware Pub Date: 2024-12-01 DOI: 10.1016/j.vrih.2024.08.001
Tian ZHENG, Xinheng WANG, Xiaolan PENG, Ning SU, Tianyi XU, Xurong XIE, Jin HUANG, Lun XIE, Feng TIAN
{"title":"Survey of neurocognitive disorder detection methods based on speech, visual, and virtual reality technologies","authors":"Tian ZHENG ,&nbsp;Xinheng WANG ,&nbsp;Xiaolan PENG ,&nbsp;Ning SU ,&nbsp;Tianyi XU ,&nbsp;Xurong XIE ,&nbsp;Jin HUANG ,&nbsp;Lun XIE ,&nbsp;Feng TIAN","doi":"10.1016/j.vrih.2024.08.001","DOIUrl":"10.1016/j.vrih.2024.08.001","url":null,"abstract":"<div><div>The global trend of population aging poses significant challenges to society and healthcare systems, particularly because of neurocognitive disorders (NCDs) such as Parkinson's disease (PD) and Alzheimer's disease (AD). In this context, artificial intelligence techniques have demonstrated promising potential for the objective assessment and detection of NCDs. Multimodal contactless screening technologies, such as speech-language processing, computer vision, and virtual reality, offer efficient and convenient methods for disease diagnosis and progression tracking. This paper systematically reviews the specific methods and applications of these technologies in the detection of NCDs using data collection paradigms, feature extraction, and modeling approaches. Additionally, the potential applications and future prospects of these technologies for the detection of cognitive and motor disorders are explored. By providing a comprehensive summary and refinement of the extant theories, methodologies, and applications, this study aims to facilitate an in-depth understanding of these technologies for researchers, both within and outside the field. To the best of our knowledge, this is the first survey to cover the use of speech-language processing, computer vision, and virtual reality technologies for the detection of NSDs.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 421-472"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
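As a concrete illustration of the speech-processing side surveyed here, the sketch below extracts a simple acoustic feature vector of the kind such screening pipelines often start from. It assumes the librosa library; the feature choices (MFCC statistics) are generic examples, not the survey's specific methods.

```python
# Minimal sketch: acoustic features for speech-based screening pipelines.
# Feature choices are illustrative, not the survey's specific methods.
import librosa
import numpy as np

def extract_speech_features(wav_path: str, sr: int = 16000) -> np.ndarray:
    """Return a fixed-length feature vector for one recording."""
    y, sr = librosa.load(wav_path, sr=sr)
    # 13 Mel-frequency cepstral coefficients per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Summarize the frame sequence with per-coefficient statistics
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```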
Previs-Real: Interactive virtual previsualization system for news shooting rehearsal and evaluation
Virtual Reality Intelligent Hardware Pub Date: 2024-12-01 DOI: 10.1016/j.vrih.2024.12.001
Che Qu, Shaocong Wang, Chao Zhou, Tongchen Zhao, Rui Guo, Cheng Wa Wong, Chi Deng, Bin Ji, Yuhui Wen, Yuanchun Shi, Yong-Jin Liu
{"title":"Previs-Real:Interactive virtual previsualization system for news shooting rehearsal and evaluation","authors":"Che Qu ,&nbsp;Shaocong Wang ,&nbsp;Chao Zhou ,&nbsp;Tongchen Zhao ,&nbsp;Rui Guo ,&nbsp;Cheng Wa Wong ,&nbsp;Chi Deng ,&nbsp;Bin Ji ,&nbsp;Yuhui Wen ,&nbsp;Yuanchun Shi ,&nbsp;Yong-Jin Liu","doi":"10.1016/j.vrih.2024.12.001","DOIUrl":"10.1016/j.vrih.2024.12.001","url":null,"abstract":"<div><h3>Background</h3><div>In the demanding field of live news broadcasting, the intricate studio production procedures and tight schedules pose significant challenges for physical rehearsals by cameramen. This paper explores the design and implementation of a lightweight virtual news previsualization system, leveraging virtual production technology and interaction design methods to address the lack of fidelity in presentations and manipulations, and the quantitative feedback of rehearsal effects in previous virtual approaches.</div></div><div><h3>Methods</h3><div>Our system, Previs-Real, is informed by user investigation with professional cameramen and studio technicians, and adheres to principles of high fidelity, accurate replication of actual hardware operations, and real-time feedback on rehearsal results. The system's software and hardware development are implemented based on Unreal Engine and accompanying toolsets, incorporating cutting-edge modeling and camera calibration methods.</div></div><div><h3>Results</h3><div>We validated Previs-Real through a user study, demonstrating superior performance in previsualization shooting tasks using the virtual system compared to traditional camera setups. The findings, supported by both objective performance metrics and subjective responses, underline Previs-Real's effectiveness and potential in transforming news broadcasting rehearsals.</div></div><div><h3>Conclusions</h3><div>Previs-Real eliminates the requirement for complex equipment interconnections and team coordination inherent in a physical studio by implementing methodologies complying the above principles, objectively resulting in a lightweight design of applicable version of virtual news previsualization system. It offers a novel solution to the challenges in news studio previsualization by focusing on key operational features rather than full environment replication. This design approach is equally effective in the process of designing lightweight systems in other fields.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 527-549"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
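The paper's specific calibration pipeline is not reproduced here; the sketch below shows the standard checkerboard calibration workflow such a virtual-production system would build on, assuming OpenCV. Board size and square size are hypothetical.

```python
# Hedged sketch of conventional checkerboard camera calibration with OpenCV.
import cv2
import numpy as np

def calibrate(images, board_size=(9, 6), square_mm=25.0):
    # 3D corner positions of the board in its own coordinate frame
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # Returns RMS reprojection error, intrinsics K, distortion, extrinsics
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    return K, dist
```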
MatStick: Changing the material sensation of objects upon impact
Virtual Reality Intelligent Hardware Pub Date: 2024-12-01 DOI: 10.1016/j.vrih.2024.11.001
Songxian Liu, Jian He, Shengsheng Jiang, Ziyan Zhang, Mengfei Lv
{"title":"MatStick: Changing the material sensation of objects upon impact","authors":"Songxian Liu,&nbsp;Jian He,&nbsp;Shengsheng Jiang,&nbsp;Ziyan Zhang,&nbsp;Mengfei Lv","doi":"10.1016/j.vrih.2024.11.001","DOIUrl":"10.1016/j.vrih.2024.11.001","url":null,"abstract":"<div><div>An increasing number of studies have focused on providing rich tactile feedback in virtual reality interactive scenarios. In this study, we addressed a tapping scenario in virtual reality by designing MatStick, a solution capable of offering diverse tapping sensations. MatStick utilizes a soft physical base to provide force feedback and modulates the instantaneous vibration of the base using a voice coil motor, thereby altering the perception of the base material. We conducted two psychophysical experiments and a subjective evaluation to assess the capabilities of MatStick. The results demonstrate that MatStick can deliver rich tapping sensations. Although users may find it challenging to directly correlate the tapping sensation with the actual physical material based solely on tactile feedback, in immersive scenarios combined with visual and auditory cues, MatStick significantly enhances the user's interaction experience.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 486-501"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
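MatStick's exact vibration-modulation scheme is not given here; a common approach in impact haptics, shown below as an illustrative sketch, drives the voice coil with an exponentially decaying sinusoid whose frequency and decay rate shift the perceived material. All parameter values are hypothetical, not MatStick's measured settings.

```python
# Illustrative impact-transient synthesis for a voice coil actuator.
import numpy as np

def impact_transient(freq_hz, decay_rate, amplitude=1.0, sr=8000, dur_s=0.1):
    """One tap transient: higher frequency / faster decay reads as harder."""
    t = np.arange(int(sr * dur_s)) / sr
    return amplitude * np.exp(-decay_rate * t) * np.sin(2 * np.pi * freq_hz * t)

wood  = impact_transient(freq_hz=100, decay_rate=60)   # dull, fast-damped
metal = impact_transient(freq_hz=300, decay_rate=15)   # bright, ringing
```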
InputJump: Augmented reality-facilitated cross-device input fusion based on spatial and semantic information
Virtual Reality Intelligent Hardware Pub Date: 2024-12-01 DOI: 10.1016/j.vrih.2024.10.001
Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Yukang Yan, Yiqiang Chen
{"title":"InputJump: Augmented reality-facilitated cross-device input fusion based on spatial and semantic information","authors":"Xin Zeng ,&nbsp;Xiaoyu Wang ,&nbsp;Tengxiang Zhang ,&nbsp;Yukang Yan ,&nbsp;Yiqiang Chen","doi":"10.1016/j.vrih.2024.10.001","DOIUrl":"10.1016/j.vrih.2024.10.001","url":null,"abstract":"<div><div>The proliferation of computing devices requires seamless cross-device interactions. Augmented reality (AR) headsets can facilitate interactions with existing computers owing to their user-centered views and natural inputs. In this study, we propose InputJump, a user-centered cross-device input fusion method that maps multi-modal cross-device inputs to interactive elements on graphical interfaces. The input jump calculates the spatial coordinates of the input target positions and the interactive elements within the coordinate system of the AR headset. It also extracts semantic descriptions of inputs and elements using large language models (LLMs). Two types of information from different inputs (e.g., gaze, gesture, mouse, and keyboard) were fused to map onto an interactive element. The proposed method is explained in detail and implemented on both an AR headset and a desktop PC. We then conducted a user study and extensive simulations to validate our proposed method. The results showed that InputJump can accurately associate a fused input with the target interactive element, enabling a more natural and flexible interaction experience.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 502-526"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
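A minimal sketch of the fusion idea described in the abstract: score each interactive element by combining spatial proximity (in the headset's coordinate frame) with semantic similarity between input and element descriptions. The weights and embedding inputs are hypothetical stand-ins for the paper's LLM-derived descriptions.

```python
# Toy spatial + semantic fusion scoring; names and weights are hypothetical.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def rank_targets(input_pos, input_emb, elements, w_spatial=0.5, w_sem=0.5):
    """elements: list of dicts with 'pos' (3D, headset frame) and 'emb'."""
    scores = []
    for el in elements:
        d = np.linalg.norm(np.asarray(input_pos) - np.asarray(el["pos"]))
        s = w_spatial * np.exp(-d) + w_sem * cosine(input_emb, el["emb"])
        scores.append(s)
    return int(np.argmax(scores))  # index of the best-matching element
```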
Automatic piano performance interaction system based on greedy algorithm for dexterous manipulator
Virtual Reality Intelligent Hardware Pub Date: 2024-12-01 DOI: 10.1016/j.vrih.2024.09.001
Yufei Wang, Junfeng Yao, Yalan Zhou, Zefeng Wang
{"title":"Automatic piano performance interaction system based on greedy algorithm for dexterous manipulator","authors":"Yufei Wang ,&nbsp;Junfeng Yao ,&nbsp;Yalan Zhou ,&nbsp;Zefeng Wang","doi":"10.1016/j.vrih.2024.09.001","DOIUrl":"10.1016/j.vrih.2024.09.001","url":null,"abstract":"<div><div>With continuous advancements in artificial intelligence (AI), automatic piano-playing robots have become subjects of cross-disciplinary interest. However, in most studies, these robots served merely as objects of observation with limited user engagement or interaction. To address this issue, we propose a user-friendly and innovative interaction system based on the principles of greedy algorithms. This system features three modules: score management, performance control, and keyboard interactions. Upon importing a custom score or playing a note via an external device, the system performs on a virtual piano in line with user inputs. This system has been successfully integrated into our dexterous manipulator-based piano-playing device, which significantly enhances user interactions.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 6","pages":"Pages 473-485"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143315911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
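As a toy illustration of greedy assignment in this setting (not the paper's actual algorithm), the sketch below assigns each incoming note to the finger with the smallest travel cost; key indices and the cost model are hypothetical.

```python
# Toy greedy note-to-finger assignment for a manipulator.
def assign_fingers(note_keys, finger_positions):
    """Greedily assign each note (a key index) to the cheapest finger."""
    plan = []
    for key in note_keys:
        # pick the finger whose current position minimizes travel cost
        finger = min(finger_positions, key=lambda f: abs(finger_positions[f] - key))
        plan.append((key, finger))
        finger_positions[finger] = key  # the finger moves to the struck key
    return plan

print(assign_fingers([40, 42, 45], {"thumb": 39, "index": 41, "middle": 43}))
# [(40, 'thumb'), (42, 'index'), (45, 'middle')]
```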
Pre-training transformer with dual-branch context content module for table detection in document images
Virtual Reality Intelligent Hardware Pub Date: 2024-10-01 DOI: 10.1016/j.vrih.2024.06.003
Yongzhi Li, Pengle Zhang, Meng Sun, Jin Huang, Ruhan He
{"title":"Pre-training transformer with dual-branch context content module for table detection in document images","authors":"Yongzhi Li ,&nbsp;Pengle Zhang ,&nbsp;Meng Sun ,&nbsp;Jin Huang ,&nbsp;Ruhan He","doi":"10.1016/j.vrih.2024.06.003","DOIUrl":"10.1016/j.vrih.2024.06.003","url":null,"abstract":"<div><h3>Background</h3><div>Document images such as statistical reports and scientific journals are widely used in information technology. Accurate detection of table areas in document images is an essential prerequisite for tasks such as information extraction. However, because of the diversity in the shapes and sizes of tables, existing table detection methods adapted from general object detection algorithms, have not yet achieved satisfactory results. Incorrect detection results might lead to the loss of critical information.</div></div><div><h3>Methods</h3><div>Therefore, we propose a novel end-to-end trainable deep network combined with a self-supervised pretraining transformer for feature extraction to minimize incorrect detections. To better deal with table areas of different shapes and sizes, we added a dual-branch context content attention module (DCCAM) to high-dimensional features to extract context content information, thereby enhancing the network's ability to learn shape features. For feature fusion at different scales, we replaced the original 3×3 convolution with a multilayer residual module, which contains enhanced gradient flow information to improve the feature representation and extraction capability.</div></div><div><h3>Results</h3><div>We evaluated our method on public document datasets and compared it with previous methods, which achieved state-of-the-art results in terms of evaluation metrics such as recall and F1-score. <span><span>https://github.com/YongZ-Lee/TD-DCCAM</span><svg><path></path></svg></span></div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 408-420"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142587144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
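The DCCAM itself is not reproduced here; the sketch below is a generic dual-branch attention block in the same spirit, with one branch reweighting channels (content) and one reweighting spatial positions (context), assuming PyTorch.

```python
# Generic dual-branch attention sketch, not the authors' exact module.
import torch
import torch.nn as nn

class DualBranchAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        content = x * self.channel_fc(x)      # channel-wise reweighting
        context = x * self.spatial_conv(x)    # position-wise reweighting
        return content + context              # fuse the two branches

feat = torch.randn(1, 64, 32, 32)
print(DualBranchAttention(64)(feat).shape)    # torch.Size([1, 64, 32, 32])
```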
Co-salient object detection with iterative purification and predictive optimization
Virtual Reality Intelligent Hardware Pub Date: 2024-10-01 DOI: 10.1016/j.vrih.2024.06.002
Yang Wen, Yuhuan Wang, Hao Wang, Wuzhen Shi, Wenming Cao
{"title":"Co-salient object detection with iterative purification and predictive optimization","authors":"Yang Wen,&nbsp;Yuhuan Wang,&nbsp;Hao Wang,&nbsp;Wuzhen Shi,&nbsp;Wenming Cao","doi":"10.1016/j.vrih.2024.06.002","DOIUrl":"10.1016/j.vrih.2024.06.002","url":null,"abstract":"<div><h3>Background</h3><div>Co-salient object detection (Co-SOD) aims to identify and segment commonly salient objects in a set of related images. However, most current Co-SOD methods encounter issues with the inclusion of irrelevant information in the co-representation. These issues hamper their ability to locate co-salient objects and significantly restrict the accuracy of detection.</div></div><div><h3>Methods</h3><div>To address this issue, this study introduces a novel Co-SOD method with iterative purification and predictive optimization (IPPO) comprising a common salient purification module (CSPM), predictive optimizing module (POM), and diminishing mixed enhancement block (DMEB).</div></div><div><h3>Results</h3><div>These components are designed to explore noise-free joint representations, assist the model in enhancing the quality of the final prediction results, and significantly improve the performance of the Co-SOD algorithm. Furthermore, through a comprehensive evaluation of IPPO and state-of-the-art algorithms focusing on the roles of CSPM, POM, and DMEB, our experiments confirmed that these components are pivotal in enhancing the performance of the model, substantiating the significant advancements of our method over existing benchmarks. Experiments on several challenging benchmark co-saliency datasets demonstrate that the proposed IPPO achieves state-of-the-art performance.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 396-407"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
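As an illustrative sketch of the purification idea (not IPPO's actual CSPM/POM design): iteratively estimate a consensus representation across the image set and down-weight features that disagree with it.

```python
# Toy iterative purification of a co-representation; weighting scheme
# and iteration count are illustrative assumptions.
import numpy as np

def purify(features, iters=3):
    """features: (n_images, dim) image-level descriptors, L2-normalized."""
    weights = np.ones(len(features))
    for _ in range(iters):
        consensus = (weights[:, None] * features).sum(0)
        consensus /= np.linalg.norm(consensus) + 1e-8
        sims = features @ consensus          # agreement with the consensus
        weights = np.clip(sims, 0, None)     # suppress irrelevant images
    return consensus, weights
```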
Music-stylized hierarchical dance synthesis with user control
Virtual Reality Intelligent Hardware Pub Date: 2024-10-01 DOI: 10.1016/j.vrih.2024.06.004
Yanbo Cheng, Yichen Jiang, Yingying Wang
{"title":"Music-stylized hierarchical dance synthesis with user control","authors":"Yanbo Cheng,&nbsp;Yichen Jiang,&nbsp;Yingying Wang","doi":"10.1016/j.vrih.2024.06.004","DOIUrl":"10.1016/j.vrih.2024.06.004","url":null,"abstract":"<div><h3>Background</h3><div>Synthesizing dance motions to match musical inputs is a significant challenge in animation research. Compared to functional human motions, such as locomotion, dance motions are creative and artistic, often influenced by music, and can be independent body language expressions. Dance choreography requires motion content to follow a general dance genre, whereas dance performances under musical influence are infused with diverse impromptu motion styles. Considering the high expressiveness and variations in space and time, providing accessible and effective user control for tuning dance motion styles remains an open problem.</div></div><div><h3>Methods</h3><div>In this study, we present a hierarchical framework that decouples the dance synthesis task into independent modules. We use a high-level choreography module built as a Transformer-based sequence model to predict the long-term structure of a dance genre and a low-level realization module that implements dance stylization and synchronization to match the musical input or user preferences. This novel framework allows the individual modules to be trained separately. Because of the decoupling, dance composition can fully utilize existing high-quality dance datasets that do not have musical accompaniments, and the dance implementation can conveniently incorporate user controls and edit motions through a decoder network. Each module is replaceable at runtime, which adds flexibility to the synthesis of dance sequences.</div></div><div><h3>Results</h3><div>Synthesized results demonstrate that our framework generates high-quality diverse dance motions that are well adapted to varying musical conditions and user controls.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 339-357"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
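A structural sketch of the decoupling described above, with hypothetical class and method names: a high-level choreography module proposes the long-term outline, and a replaceable low-level module realizes it against the music and user style.

```python
# Structural sketch of a decoupled, runtime-replaceable synthesis pipeline.
class ChoreographyModule:
    def plan(self, genre: str, n_segments: int) -> list:
        # stand-in for the Transformer sequence model's long-term structure
        return [f"{genre}_phrase_{i}" for i in range(n_segments)]

class RealizationModule:
    def realize(self, phrase: str, music_beat: float, user_style: str) -> dict:
        return {"phrase": phrase, "beat": music_beat, "style": user_style}

def synthesize(genre, beats, user_style, chor=None, real=None):
    chor = chor or ChoreographyModule()      # modules swappable at runtime
    real = real or RealizationModule()
    plan = chor.plan(genre, len(beats))
    return [real.realize(p, b, user_style) for p, b in zip(plan, beats)]

print(synthesize("waltz", [0.0, 0.5, 1.0], user_style="energetic")[0])
```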
Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models
Virtual Reality Intelligent Hardware Pub Date: 2024-10-01 DOI: 10.1016/j.vrih.2024.08.006
Robert KOSK, Richard SOUTHERN, Lihua YOU, Shaojun BIAN, Willem KOKKE, Greg MAGUIRE
{"title":"Mesh representation matters: investigating the influence of different mesh features on perceptual and spatial fidelity of deep 3D morphable models","authors":"Robert KOSK ,&nbsp;Richard SOUTHERN ,&nbsp;Lihua YOU ,&nbsp;Shaojun BIAN ,&nbsp;Willem KOKKE ,&nbsp;Greg MAGUIRE","doi":"10.1016/j.vrih.2024.08.006","DOIUrl":"10.1016/j.vrih.2024.08.006","url":null,"abstract":"<div><h3>Background</h3><div>Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of synthesised meshes. Despite their significance, these models have not been compared with different mesh representations and evaluated jointly with point-wise distance and perceptual metrics.</div></div><div><h3>Methods</h3><div>We compare the influence of different mesh representation features to various deep 3DMMs on spatial and perceptual fidelity of the reconstructed meshes. This paper proves the hypothesis that building deep 3DMMs from meshes represented with global representations leads to lower spatial reconstruction error measured with <span><math><mrow><msub><mi>L</mi><mn>1</mn></msub></mrow></math></span> and <span><math><mrow><msub><mi>L</mi><mn>2</mn></msub></mrow></math></span> norm metrics and underperforms on perceptual metrics. In contrast, using differential mesh representations which describe differential surface properties yields lower perceptual FMPD and DAME and higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from perceptual and spatial fidelity perspectives.</div></div><div><h3>Results</h3><div>The results presented in this paper provide guidance in selecting mesh representations to build deep 3DMMs accordingly to spatial and perceptual quality objectives and propose combinations of mesh representations and deep 3DMMs which improve either perceptual or spatial fidelity of existing methods.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 383-395"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
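As a concrete example of a differential mesh representation of the kind compared here, the sketch below computes uniform-weight Laplacian (delta) coordinates, which encode each vertex relative to its one-ring neighborhood rather than its global position.

```python
# Uniform-weight Laplacian (delta) coordinates of a triangle mesh.
import numpy as np

def delta_coordinates(vertices, neighbors):
    """vertices: (n, 3) array; neighbors: list of neighbor-index lists."""
    deltas = np.zeros_like(vertices, dtype=float)
    for i, ring in enumerate(neighbors):
        # vertex minus the centroid of its one-ring neighbors
        deltas[i] = vertices[i] - vertices[ring].mean(axis=0)
    return deltas  # encodes local surface detail, not global position
```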
CURDIS: A template for incremental curve discretization algorithms and its application to conics
Virtual Reality Intelligent Hardware Pub Date: 2024-10-01 DOI: 10.1016/j.vrih.2024.06.005
Philippe Latour, Marc Van Droogenbroeck
{"title":"CURDIS: A template for incremental curve discretization algorithms and its application to conics","authors":"Philippe Latour,&nbsp;Marc Van Droogenbroeck","doi":"10.1016/j.vrih.2024.06.005","DOIUrl":"10.1016/j.vrih.2024.06.005","url":null,"abstract":"<div><div>We introduce CURDIS, a template for algorithms to discretize arcs of regular curves by incrementally producing a list of support pixels covering the arc. In this template, algorithms proceed by finding the tangent quadrant at each point of the arc and determining which side the curve exits the pixel according to a tailored criterion. These two elements can be adapted for any type of curve, leading to algorithms dedicated to the shape of specific curves. While the calculation of the tangent quadrant for various curves, such as lines, conics, or cubics, is simple, it is more complex to analyze how pixels are traversed by the curve. In the case of conic arcs, we found a criterion for determining the pixel exit side. This leads us to present a new algorithm, called CURDIS-C, specific to the discretization of conics, for which we provide all the details. Surprisingly, the criterion for conics requires between one and three sign tests and four additions per pixel, making the algorithm efficient for resource-constrained systems and feasible for fixed-point or integer arithmetic implementations. Our algorithm also perfectly handles the pathological cases in which the conic intersects a pixel twice or changes quadrants multiple times within this pixel, achieving this generality at the cost of potentially computing up to two square roots per arc. We illustrate the use of CURDIS for the discretization of different curves, such as ellipses, hyperbolas, and parabolas, even when they degenerate into lines or corners.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":"6 5","pages":"Pages 358-382"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
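The conic exit-side criterion is the paper's contribution and is not reproduced here; the sketch below applies the same template idea (a sign test decides through which side the curve exits each pixel) to the simplest case, a line segment with slope in [0, 1].

```python
# Illustrative CURDIS-style discretization of a line segment (slope in [0,1]).
# The implicit function f(x, y) = dy*(x - x0) - dx*(y - y0) is positive below
# the line; its sign at the pixel's upper-right corner decides the exit side.
def discretize_line(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    assert dx >= dy >= 0, "sketch assumes the first octant"
    x, y = x0, y0
    pixels = [(x, y)]
    while (x, y) != (x1, y1):
        f = dy * (x + 1 - x0) - dx * (y + 1 - y0)  # one sign test per pixel
        if f < 0:
            x += 1              # line passes below the corner: exit right
        elif f > 0:
            y += 1              # line passes above the corner: exit top
        else:
            x += 1; y += 1      # exactly through the corner: step diagonally
        pixels.append((x, y))
    return pixels

print(discretize_line(0, 0, 5, 2))
# [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1), (4, 1), (5, 2)]
```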