Function-Based Haptic Interaction in Cyberworlds
Lei Wei, A. Sourin
2011 International Conference on Cyberworlds
DOI: 10.1109/CW.2011.19 (https://doi.org/10.1109/CW.2011.19)
Citations: 6
Abstract
Polygon- and point-based models dominate in virtual reality. These models also shape haptic rendering algorithms, which are often based on collisions with polygons. We use mathematical functions to define and implement geometry (curves, surfaces and solid objects), visual appearance (3D colors and geometric textures) and various tangible physical properties (elasticity, friction, viscosity, and force fields). The function definitions are given as analytical formulas (explicit, implicit and parametric), function scripts and procedures. Since the defining functions are very compact, they can be exchanged efficiently between the participating clients of collaborative virtual environments. We propose an algorithm for haptic rendering of virtual scenes containing mutually penetrating objects of different sizes, with an arbitrarily located observer and no prior knowledge of the scene to be rendered. The algorithm casts multiple haptic rendering rays from the Haptic Interaction Point (HIP) and builds a stack to keep track of all objects colliding with the HIP. Collision detection relies on an implicit function representation of the object surfaces. The proposed approach gives us flexibility in choosing the actual rendering platform. The function-defined objects and their constituent parts can be used together with other common definitions of virtual objects, such as polygon meshes, point sets and voxel volumes. We implemented an extension of X3D and VRML which allows complex geometry, appearance and haptic effects in virtual scenes to be defined by functions alongside common polygon-based models, supporting various object sizes, mutual penetrations, an arbitrarily located observer and variable precision.
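The core test behind the collision detection described above can be sketched briefly: with an implicit representation f(p) that is negative inside an object's surface, deciding whether the HIP has penetrated an object reduces to a sign check, and mutually penetrating objects are handled simply by collecting every object whose function is negative at the HIP. The sketch below is illustrative only; the object names, the `sphere` helper and the `colliding_objects` function are hypothetical and not from the paper.

```python
# Minimal sketch of implicit-function collision detection at the HIP.
# Convention assumed here: f(p) < 0 inside the surface, f(p) > 0 outside.

def sphere(center, radius):
    """Implicit function of a sphere: f(p) = |p - c|^2 - r^2 (hypothetical helper)."""
    cx, cy, cz = center
    def f(p):
        x, y, z = p
        return (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 - radius ** 2
    return f

def colliding_objects(hip, scene):
    """Collect all objects whose implicit function is negative at the
    Haptic Interaction Point (HIP); mutually penetrating objects all
    appear in the resulting stack."""
    stack = []
    for name, f in scene:
        if f(hip) < 0.0:  # HIP lies inside this object's surface
            stack.append(name)
    return stack

# Two mutually penetrating spheres and one distant sphere.
scene = [("big",   sphere((0, 0, 0), 2.0)),
         ("small", sphere((1, 0, 0), 1.5)),
         ("far",   sphere((10, 0, 0), 1.0))]
print(colliding_objects((0.5, 0.0, 0.0), scene))  # ['big', 'small']
```

In the paper's algorithm this per-object sign test is combined with multiple haptic rendering rays cast from the HIP; the stack of penetrated objects is what lets the method handle mutual penetrations without prior knowledge of the scene.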