IEEE Transactions on Visualization and Computer Graphics: Latest Articles

Multi-View Large Reconstruction Model via Geometry-Aware Positional Encoding and Attention.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-23. DOI: 10.1109/TVCG.2025.3572341
Mengfei Li, Xiaoxiao Long, Yixun Liang, Weiyu Li, Yuan Liu, Peng Li, Wenhan Luo, Wenping Wang, Yike Guo
Abstract: Despite recent advancements in the Large Reconstruction Model (LRM) demonstrating impressive results, when extending its input from a single image to multiple images, it exhibits inefficiencies, subpar geometric and texture quality, and slower convergence than expected. This is because LRM formulates 3D reconstruction as a naive images-to-3D translation problem, ignoring the strong 3D coherence among the input images. In this paper, we propose a Multi-view Large Reconstruction Model (M-LRM) designed to reconstruct high-quality 3D shapes from multi-view images in a 3D-aware manner. Specifically, we introduce a multi-view consistent cross-attention scheme to enable M-LRM to accurately query information from the input images. Moreover, we employ the 3D priors of the input multi-view images to initialize the triplane tokens. Compared to previous methods, the proposed M-LRM can generate 3D shapes of high fidelity. Experimental studies demonstrate that our model achieves a significant performance gain and faster training convergence. Project page: https://murphylmf.github.io/M-LRM/
Citations: 0
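The abstract names a multi-view consistent cross-attention scheme but gives no code; below is a minimal NumPy sketch of plain cross-attention in which triplane tokens (queries) gather information from concatenated multi-view image features (keys/values). It omits the paper's geometry-aware positional encoding, and every dimension, token count, and weight matrix here is an illustrative assumption rather than the paper's architecture.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def cross_attention(triplane_tokens, view_features, Wq, Wk, Wv):
        # Triplane tokens act as queries; keys/values come from the
        # concatenated per-view image features, so every token can pull
        # information from all input views at once.
        Q = triplane_tokens @ Wq
        K = view_features @ Wk
        V = view_features @ Wv
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        return softmax(scores) @ V

    rng = np.random.default_rng(0)
    d = 64                                   # illustrative feature width
    tokens = rng.normal(size=(96, d))        # hypothetical triplane token count
    feats = rng.normal(size=(4 * 256, d))    # 4 views x 256 patches (made up)
    Wq, Wk, Wv = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
    print(cross_attention(tokens, feats, Wq, Wk, Wv).shape)   # (96, 64)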
Into the Void: Mapping the Unseen Gaps in High Dimensional Data.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-23. DOI: 10.1109/TVCG.2025.3572850
Xinyu Zhang, Tyler Estro, Geoff Kuenning, Erez Zadok, Klaus Mueller
Abstract: We present a comprehensive pipeline, integrated with a visual analytics system called GapMiner, capable of exploring and exploiting untapped opportunities within the empty regions of high-dimensional datasets. Our approach utilizes a novel Empty-Space Search Algorithm (ESA) to identify the center points of these uncharted voids, which represent reservoirs for potentially valuable new configurations. Initially, this process is guided by user interactions through GapMiner, which visualizes Empty-Space Configurations (ESCs) within the context of the dataset and allows domain experts to explore and refine ESCs for subsequent validation in domain experiments or simulations. These activities iteratively enhance the dataset and contribute to training a connected deep neural network (DNN). As training progresses, the DNN gradually assumes the role of identifying and validating high-potential ESCs, reducing the need for direct user involvement. Once the DNN achieves sufficient accuracy, it autonomously guides the exploration of optimal configurations by predicting performance and refining configurations through a combination of gradient ascent and improved empty-space searches. Domain experts were actively involved throughout the system's development. Our findings demonstrate that this methodology consistently generates superior novel configurations compared to conventional randomization-based approaches. We illustrate its effectiveness in multiple case studies with diverse objectives.
Citations: 0
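The Empty-Space Search Algorithm itself is not described in the abstract; as a rough illustration of the core idea (locating centers of large voids), this sketch scores random candidate points by their distance to the nearest data point and keeps the farthest one. The uniform candidate sampling and KD-tree query are assumptions, not the actual ESA.

    import numpy as np
    from scipy.spatial import cKDTree

    def largest_empty_center(data, n_candidates=100_000, seed=0):
        # Score each random candidate by its distance to the nearest data
        # point; the best-scoring candidate approximates the center of the
        # largest void within the data's bounding box.
        rng = np.random.default_rng(seed)
        lo, hi = data.min(axis=0), data.max(axis=0)
        cand = rng.uniform(lo, hi, size=(n_candidates, data.shape[1]))
        dist, _ = cKDTree(data).query(cand)
        best = int(np.argmax(dist))
        return cand[best], float(dist[best])

    data = np.random.default_rng(1).normal(size=(500, 5))
    center, radius = largest_empty_center(data)
    print(center.round(2), round(radius, 3))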
What Draws Your Attention First? An Attention Prediction Model Based on Spatial Features in Virtual Reality.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-22. DOI: 10.1109/TVCG.2025.3572408
Matthew S Castellana, Ping Hu, Doris Gutierrez, Arie E Kaufman
Abstract: Understanding visual attention is key to designing efficient human-computer interaction, especially for virtual reality (VR) and augmented reality (AR) applications. However, the relationship between 3D spatial attributes of visual stimuli and visual attention is still underexplored. Thus, we design an experiment to collect a gaze dataset in VR, and use it to quantitatively model the probability of first attention between two stimuli. First, we construct the dataset by presenting subjects with a synthetic VR scene containing varying spatial configurations of two spheres. Second, we formulate their selective attention based on a probability model that takes as input two view-specific stimuli attributes: their eccentricities in the field of view and their sizes as visual angles. Third, we train two models using our gaze dataset to predict the probability distribution of a user's preferences of visual stimuli within the scene. We evaluate our method by comparing model performance across two challenging synthetic scenes in VR. Our application case study demonstrates that VR designers can utilize our models for attention prediction in two-foreground-object scenarios, which are common when designing 3D content for storytelling or scene guidance. We make the dataset and the source code to visualize it available alongside this work.
Citations: 0
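The abstract does not give the model's functional form; a plausible minimal stand-in is a logistic model over the differences of the two view-specific attributes, eccentricity and visual angle. The weights below are invented for illustration and are not the fitted models from the paper.

    import math

    def p_first_attention(ecc_a, size_a, ecc_b, size_b,
                          w_ecc=-0.15, w_size=0.25):
        # Hypothetical weights: being nearer the center of the field of
        # view (small eccentricity) and subtending a larger visual angle
        # both raise the chance of being fixated first.
        z = w_ecc * (ecc_a - ecc_b) + w_size * (size_a - size_b)
        return 1.0 / (1.0 + math.exp(-z))

    # Stimulus A: 5 deg eccentricity, 8 deg visual angle; B: 20 deg, 4 deg.
    print(p_first_attention(5, 8, 20, 4))   # A is clearly favored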
A Typology of Decision-Making Tasks for Visualization.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-22. DOI: 10.1109/TVCG.2025.3572842
Camelia D Brumar, Sam Molnar, Gabriel Appleby, Kristi Potter, Remco Chang
Abstract: Despite decision-making being a vital goal of data visualization, little work has been done to differentiate decision-making tasks within the field. While visualization task taxonomies and typologies exist, they often focus on more granular analytical tasks that are too low-level to describe large complex decisions, which can make it difficult to reason about and design decision-support tools. In this paper, we contribute a typology of decision-making tasks that were iteratively refined from a list of design goals distilled from a literature review. Our typology is concise and consists of only three tasks: CHOOSE, ACTIVATE, and CREATE. Although decision types originating in other disciplines exist, we provide definitions for these tasks that are suitable for the visualization community. Our proposed typology offers two benefits. First, the ability to compose and hierarchically organize the tasks enables flexible and clear descriptions of decisions with varying levels of complexities. Second, the typology encourages productive discourse between visualization designers and domain experts by abstracting the intricacies of data, thereby promoting clarity and rigorous analysis of decision-making processes. We demonstrate the benefits of our typology through four case studies, and present an evaluation of the typology from semi-structured interviews with experienced members of the visualization community who have contributed to developing or publishing decision support systems for domain experts. Our interviewees used our typology to delineate the decision-making processes supported by their systems, demonstrating its descriptive capacity and effectiveness. Finally, we present preliminary findings on the usefulness of our typology for visualization design.
Citations: 0
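To make the composability concrete, here is a small sketch of how the three tasks might nest to describe a larger decision; the example decision and its decomposition are hypothetical, with only the task names taken from the typology.

    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        task: str                 # "CHOOSE", "ACTIVATE", or "CREATE"
        label: str
        parts: list = field(default_factory=list)

    def describe(d, depth=0):
        print("  " * depth + f"{d.task}: {d.label}")
        for p in d.parts:
            describe(p, depth + 1)

    # Hypothetical decision decomposed with the three tasks:
    plan = Decision("CREATE", "assemble an evacuation plan", [
        Decision("CHOOSE", "pick shelter locations"),
        Decision("ACTIVATE", "switch traffic signals to outbound flow"),
    ])
    describe(plan)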
MixRF: Universal Mixed Radiance Fields with Points and Rays Aggregation.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-20. DOI: 10.1109/TVCG.2025.3572015
Haiyang Bai, Tao Lu, Jiaqi Zhu, Wei Huang, Chang Gou, Jie Guo, Lijun Chen, Yanwen Guo
Abstract: Recent advancements in neural rendering methods, such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3D-GS), have significantly revolutionized photo-realistic novel view synthesis of scenes with multiple photos or videos as input. However, existing approaches within the NeRF and 3D-GS frameworks often assume the independence of point sampling and ray casting, which are intrinsic to volume rendering and alpha-blending techniques. These underlying assumptions limit the ability to aggregate context within subspaces, such as densities and colors in the radiance fields and pixels on the image plane, leading to synthesized images that lack fine details and smoothness. To overcome this, we propose a universal framework, MixRF, comprising a Radiance Field Mixer (RF-mixer) and a Color Domain Mixer (CD-mixer), to sufficiently aggregate and fully explore information in neighboring sampled points and casting rays, separately. The RF-mixer treats sampled points as an explicit point cloud, enabling the aggregation of density and color attributes from neighboring points to better capture local geometry and appearance. Meanwhile, the CD-mixer rearranges rendered pixels on the sub-image plane, improving smoothness and recovering fine details and textures. Both mixers employ a kernel-based mixing strategy to facilitate effective and controllable attribute aggregation, ensuring a more comprehensive exploration of radiance values and pixel information. Extensive experiments demonstrate that our MixRF framework is compatible with radiance field-based methods, including NeRF and 3D-GS designs. The proposed framework dramatically enhances performance in both qualitative and quantitative evaluations, with less than a 25% increase in computational overhead during inference.
Citations: 0
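The abstract mentions a kernel-based mixing strategy without specifying it; this sketch shows one plausible reading of the RF-mixer side: Gaussian-kernel aggregation of each sampled point's attributes from its k nearest neighbors. The kernel choice, bandwidth, and array shapes are assumptions, not the paper's design.

    import numpy as np
    from scipy.spatial import cKDTree

    def kernel_mix(points, attrs, k=8, bandwidth=0.05):
        # For every sampled point, blend its attributes (e.g. density and
        # color) with those of its k nearest neighbors using a Gaussian
        # kernel, so local context flows into each radiance value.
        dist, idx = cKDTree(points).query(points, k=k)
        w = np.exp(-(dist / bandwidth) ** 2)
        w /= w.sum(axis=1, keepdims=True)
        return (w[..., None] * attrs[idx]).sum(axis=1)

    rng = np.random.default_rng(0)
    pts = rng.uniform(size=(1000, 3))        # stand-in for points along rays
    attrs = rng.uniform(size=(1000, 4))      # density + RGB per point
    print(kernel_mix(pts, attrs).shape)      # (1000, 4)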
Fast Intersection-free Remeshing of Triangular Meshes.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-20. DOI: 10.1109/TVCG.2025.3569926
Taoran Liu, Hongfei Ye, Jianjun Chen
Abstract: We propose a fast intersection-free remeshing method for triangular meshes that robustly and efficiently generates high-quality non-intersecting meshes. Conducting intersection checks on all local operations during remeshing to prevent intersections is the principal efficiency bottleneck. Our method is based on a key observation: intersections primarily occur in structurally complex regions. Accordingly, we develop an adaptive method to identify these key regions and perform intersection checks only for local operations within these regions during remeshing, significantly improving efficiency. Our method is an order of magnitude faster than traditional approaches that perform intersection checks on all local operations. Furthermore, we introduce a flip-aware extension mechanism that effectively avoids triangle flipping by constraining the optimization space of local operations, thereby avoiding the formation of irregular sharp edges. We also employ an adaptive iterative size field to eliminate the banding phenomenon and propose a quasi-geometric size field adjustment method to quickly achieve smooth size transitions, thereby improving mesh quality. Compared to state-of-the-art methods, our method consistently and quickly generates higher quality non-intersecting meshes. In addition, we have validated the robustness and efficiency of our method using all 5,469 non-intersecting valid models from the Thingi10K dataset.
Citations: 0
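The paper's adaptive criterion for identifying key regions is not spelled out in the abstract; as one plausible proxy, this sketch flags triangles whose normals deviate sharply from an adjacent triangle's, so intersection checks could be restricted to the flagged subset. The angle threshold and adjacency representation are assumptions.

    import numpy as np

    def flag_complex_triangles(tri_normals, adjacency, angle_deg=30.0):
        # A triangle whose normal deviates sharply from an adjacent
        # triangle's normal sits in a structurally complex region and is
        # flagged; only flagged triangles would get intersection checks.
        cos_t = np.cos(np.radians(angle_deg))
        flags = np.zeros(len(tri_normals), dtype=bool)
        for a, b in adjacency:
            if np.dot(tri_normals[a], tri_normals[b]) < cos_t:
                flags[a] = flags[b] = True
        return flags

    # Toy mesh: triangle 2 folds away sharply from triangle 1.
    normals = np.array([[0.0, 0, 1], [0, 0, 1], [0, 1, 0]])
    adjacency = [(0, 1), (1, 2)]
    print(flag_complex_triangles(normals, adjacency))   # [False  True  True]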
Interactive Rendering of Relightable and Animatable Gaussian Avatars.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-19. DOI: 10.1109/TVCG.2025.3569923
Youyi Zhan, Tianjia Shao, He Wang, Yin Yang, Kun Zhou
Abstract: Creating relightable and animatable avatars from multi-view or monocular videos is a challenging task for digital human creation and virtual reality applications. Previous methods rely on neural radiance fields or ray tracing, resulting in slow training and rendering processes. By utilizing Gaussian Splatting, we propose a simple and efficient method to decouple body materials and lighting from sparse-view or monocular avatar videos, so that the avatar can be rendered simultaneously under novel viewpoints, poses, and lightings at interactive frame rates (6.9 fps). Specifically, we first obtain the canonical body mesh using a signed distance function and assign attributes to each mesh vertex. The Gaussians in the canonical space then interpolate from nearby body mesh vertices to obtain the attributes. We subsequently deform the Gaussians to the posed space using forward skinning, and combine the learnable environment light with the Gaussian attributes for shading computation. To achieve fast shadow modeling, we rasterize the posed body mesh from dense viewpoints to obtain the visibility. Our approach is not only simple but also fast enough to allow interactive rendering of avatar animation under environmental light changes. Experiments demonstrate that, compared to previous works, our method can render higher quality results at a faster speed on both synthetic and real datasets.
Citations: 0
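The forward-skinning step the abstract mentions is standard linear blend skinning; this minimal NumPy sketch deforms canonical-space points (stand-ins for Gaussian centers) into the posed space. The attribute interpolation from mesh vertices and the shading computation are omitted, and the toy bones and weights are invented.

    import numpy as np

    def linear_blend_skinning(points, weights, bone_transforms):
        # points:          (N, 3) canonical-space positions
        # weights:         (N, B) skinning weights, rows summing to 1
        # bone_transforms: (B, 4, 4) homogeneous bone matrices
        homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
        blended = np.einsum("nb,bij->nij", weights, bone_transforms)
        return np.einsum("nij,nj->ni", blended, homo)[:, :3]

    pts = np.random.default_rng(0).normal(size=(5, 3))
    w = np.tile([1.0, 0.0], (5, 1))          # every point follows bone 0
    T = np.stack([np.eye(4), np.eye(4)])
    T[0, :3, 3] = [0.0, 0.0, 1.0]            # bone 0 translates up by one unit
    print(linear_blend_skinning(pts, w, T) - pts)   # each row is [0, 0, 1]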
SDA-Net: A global feature point cloud completion network based on serialization and dual attention.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-19. DOI: 10.1109/TVCG.2025.3571467
Weichao Wu, Yongyang Xu, Zhong Xie
Abstract: Point cloud completion is essential for restoring 3D geometric data lost due to occlusions or sensor limitations. Existing methods often rely on k-nearest neighbor (KNN)-based local feature extraction, which focuses on neighborhoods around central points while neglecting critical global structural information. Additionally, Transformer-based approaches, while effective at modeling global context, typically use central point feature sequences to reduce computational complexity. This windowed attention strategy compromises the preservation of global context, leading to incomplete modeling of the point cloud's overall structure. To address these challenges, we propose SDA-Net, a dual-attention point cloud completion network utilizing multiple serialization strategies. These strategies transform unordered point clouds into structured sequences, enabling comprehensive modeling of inter-point relationships. Additionally, the dual-attention mechanism enhances global feature extraction through complementary spatial and channel-wise self-attention, effectively compensating for the loss of global context. Extensive experiments demonstrate that SDA-Net achieves state-of-the-art performance, including an average Chamfer Distance (CD) of 6.48 on the PCN dataset. Furthermore, it excels in real-world applications, accurately reconstructing fine-grained details in LiDAR-scanned point clouds. The source code is available at https://github.com/Hibiki-Ula/SDA-Net.
Citations: 0
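The abstract says multiple serialization strategies are used but does not name them; a common choice for turning an unordered point cloud into a structured sequence is Z-order (Morton) sorting, sketched below as an assumed example rather than the paper's actual strategy.

    import numpy as np

    def interleave(x, bits=10):
        # Spread the low `bits` bits of each value to every third position.
        out = np.zeros_like(x)
        for i in range(bits):
            out |= ((x >> np.uint64(i)) & np.uint64(1)) << np.uint64(3 * i)
        return out

    def morton_order(points, bits=10):
        # Quantize coordinates, interleave their bits into one Z-order
        # (Morton) code per point, and sort: nearby points in 3D tend to
        # land near each other in the resulting 1D sequence.
        lo, hi = points.min(axis=0), points.max(axis=0)
        q = ((points - lo) / (hi - lo + 1e-12) * (2**bits - 1)).astype(np.uint64)
        code = (interleave(q[:, 0], bits)
                | (interleave(q[:, 1], bits) << np.uint64(1))
                | (interleave(q[:, 2], bits) << np.uint64(2)))
        return np.argsort(code)

    pts = np.random.default_rng(0).uniform(size=(2048, 3))
    serialized = pts[morton_order(pts)]      # structured sequence for attention
    print(serialized.shape)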
A Potential Field Method for Tooth Motion Planning in Orthodontic Treatment.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-19. DOI: 10.1109/TVCG.2025.3567299
Yumeng Liu, Yuexin Ma, Lei Yang, Congyi Zhang, Guangshun Wei, Runnan Chen, Min Gu, Jia Pan, Zhengbao Yang, Taku Komura, Shiqing Xin, Yuanfeng Zhou, Changhe Tu, Wenping Wang
Abstract: Invisible orthodontics, commonly known as clear alignment treatment, offers a more comfortable and aesthetically pleasing alternative in orthodontic care, attracting considerable attention in the dental community in recent years. It replaces conventional metal braces with a series of removable, transparent aligners. Each aligner is crafted to facilitate a gradual adjustment of the teeth, ensuring progressive stages of dental correction. This necessitates careful design of the teeth's motion. Here we present an automatic method and a system for generating collision-free tooth motion plans that also avoid gaps between adjacent teeth, which are unacceptable in clinical practice. To tackle this task, we formulate it as a constrained optimization problem and utilize the interior point method for its solution. We also developed an interactive system that enables dentists to easily visualize and edit the paths. Our method significantly speeds up the clear aligner planning process, creating the desired motion paths for a full set of teeth in under five minutes, a task that typically requires several hours of manual work. Our experiments and user studies confirm the effectiveness of this method in planning teeth movement, showcasing its potential to streamline orthodontic procedures.
Citations: 0
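The abstract frames path planning as a constrained optimization solved with an interior point method; this drastically simplified toy keeps that framing with two teeth moving along one axis, a smoothness objective, and a minimum-gap constraint, solved with SciPy's trust-constr (an interior-point-style solver). The 1D setup, objective, and numbers are all illustrative assumptions; the real problem involves rigid 3D tooth motions and mesh collision avoidance.

    import numpy as np
    from scipy.optimize import minimize, NonlinearConstraint

    # Two teeth move along one axis from start to goal over T waypoints;
    # the optimizer finds short, even steps while a hard constraint keeps
    # the teeth at least `gap` apart at every waypoint.
    T, gap = 8, 0.5
    start = np.array([0.0, 0.6])
    goal = np.array([2.0, 2.6])

    def unpack(z):
        return np.vstack([start, z.reshape(T, 2), goal])

    def smoothness(z):
        return np.sum(np.diff(unpack(z), axis=0) ** 2)

    def separation(z):
        p = unpack(z)
        return p[:, 1] - p[:, 0]     # must stay >= gap everywhere

    z0 = np.linspace(start, goal, T + 2)[1:-1].ravel()
    res = minimize(smoothness, z0, method="trust-constr",
                   constraints=NonlinearConstraint(separation, gap, np.inf))
    print(unpack(res.x).round(3))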
Design and Evaluation of a 6-DoF Wearable Fingertip Device for Haptic Shape Rendering.
IEEE transactions on visualization and computer graphics. Pub Date: 2025-05-19. DOI: 10.1109/TVCG.2025.3571705
Dapeng Chen, Da Yu, Yi Ding, Haojun Ni, Lifeng Zhu, Hong Zeng, Zhong Wei, Jia Liu, Aiguo Song
Abstract: As virtual objects contain increasingly rich attribute information, small wearable fingertip devices need to have higher degrees of freedom (DoFs) to convey the haptic sensation of virtual objects. In order to effectively display the shape features of virtual objects to users through curvature, we designed a 6-DoF wearable fingertip device (WFD). The WFD is built around a 6-DoF Stewart parallel mechanism, consisting of a static platform and a mobile platform connected by six revolute-spherical-spherical kinematic chains. The translation and rotation of the mobile platform are driven by six miniature servo motors, which can simulate haptic sensations such as making and breaking contact, sliding, and skin stretch when the fingertip interacts with a virtual surface. The WFD is fixed at the user's dominant index finger using hook-and-loop fasteners, with a size of 68 × 59 × 56 mm³ and a mass of 45.5 g. We analyzed and validated the kinematic model of the WFD and tested its force output capability. Finally, we invited 15 adults to conduct three subjective perception experiments to evaluate the performance of the WFD in curvature perception and shape display. The experimental results show that: (1) the just noticeable difference (JND) for curvature identification using the WFD is 3.02 ± 0.23 m⁻¹; (2) the 6-DoF haptic feedback provided by the WFD improves the accuracy of curved surface recognition from 53.4 ± 7.1% with 3-DoF feedback to 72.0 ± 5.9%; (3) even without visual feedback, the shape recognition accuracy of the WFD when combined with the Touch device reaches 82.3 ± 8.2%. These results show that the WFD has good performance and potential in curvature perception and shape display.
Citations: 0
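A Stewart platform's pose-to-actuator mapping is classical inverse kinematics: transform the mobile platform's anchor points by the desired pose, then measure the vector each chain must span from its base anchor. The sketch below uses an invented anchor layout; the device's real revolute-spherical-spherical chain geometry and servo mapping are more involved.

    import numpy as np

    def rot_xyz(rx, ry, rz):
        # Rotation matrix from roll-pitch-yaw angles (radians).
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def leg_vectors(base_pts, plat_pts, translation, angles):
        # Standard parallel-platform inverse kinematics: rotate and
        # translate the mobile platform's anchor points, then measure the
        # vector each of the six chains must span from its base anchor.
        R = rot_xyz(*angles)
        return plat_pts @ R.T + translation - base_pts

    # Invented anchor layout: base and platform anchors on 15 mm and 10 mm
    # circles, offset by 30 degrees (not the device's real geometry).
    a = np.radians([0, 60, 120, 180, 240, 300])
    base = np.stack([15 * np.cos(a), 15 * np.sin(a), np.zeros(6)], axis=1)
    p = a + np.radians(30)
    plat = np.stack([10 * np.cos(p), 10 * np.sin(p), np.zeros(6)], axis=1)

    legs = leg_vectors(base, plat, np.array([0.0, 0.0, 12.0]),
                       np.radians([5.0, 0.0, 10.0]))
    print(np.linalg.norm(legs, axis=1).round(2))   # required chain lengths (mm)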