{"title":".NET API Wrapping for Existing C++ Haptic APIs","authors":"Z. Mahboubi, S. Clarke","doi":"10.1109/HAVE.2006.283806","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283806","url":null,"abstract":"For a long time haptic devices were expensive and therefore only accessible to a specialized community. But with companies like Novint Technologies introducing a peripheral intended to sell for about US$100, haptic devices can be expected to be affordable for a wider public. But considering that most haptic APIs are in C++, a language intended for expert programmers, novice programmers wanting to program haptic devices would face a steep learning curve. However, if the APIs were to be usable from within the .NET framework, it would allow the more novice users to program using over 20 programming languages and extensive programming solutions and therefore they would be able to easily and efficiently develop software with haptic capabilities. This paper presents a set of guidelines for a design architecture that would allow migrating an existing C++ API to the .NET framework without having to rewrite it from scratch. The presented architecture was implemented by wrapping the Sensable Ghost SDK 3.0. It was then used in both software and hardware based scenarios","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126263533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finger inverse kinematics using error model analysis for gesture enabled navigation in virtual environments","authors":"A. El-Sawah, N. Georganas, E. Petriu","doi":"10.1109/HAVE.2006.283786","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283786","url":null,"abstract":"In this paper we provide a new method for solving the hand fingers inverse kinematics problem. Given the finger's end-effector position with respect to the finger's metacarpal joint, the finger's four degrees of freedom joint angles are uniquely solved directly without iterations. The solution of a closely related, simpler inverse kinematics problem is used as a rough estimate of the finger's MCP and abduction angles. The error model of the estimate is used to correct the prediction. The error analysis is done a priori and is used directly in real-time. The method provides accurate results and is computationally efficient","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130607214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-situated vibrotactile force feedback and laparoscopy performance","authors":"Hao Xin, C. M. Burns, J. Zelek","doi":"10.1109/HAVE.2006.283802","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283802","url":null,"abstract":"Sensory substitution cues have shown to enable force feedback in laparoscopic surgery. However, the sensory cues have been mostly visual, while tactile cues are largely ignored in the context of laparoscopic surgery. Vibrotactile force feedback cues implemented using pancake motors activated at predetermined force levels is tested in this study. Preliminary results show that tactile cues could potentially reduce the incidences of excessive use of force compared to providing visual or visual and tactile cues. However further study is needed to assess the effectiveness of tactile cues","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"70 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132679780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effectiveness of a vibro-tactile feedback to cue a stepping response to a balance challenge.","authors":"F. Asseman, A. Bronstein, M. Gresty","doi":"10.1109/HAVE.2006.283797","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283797","url":null,"abstract":"Our purpose was to evaluate vibro-tactile feedback in cueing the ecologically important manoeuvre of making a `saving' step response to movement of the support surface. Initial experiments to develop this technique were aimed at optimization of the type of transducer used to detect balance's threat and of its sitting on the body to provide the most appropriate feedback of imbalance. A transient movement of a support platform was used to produce perturbations that would provoke a stepping response. Results on normal subjects and a range of patients with balance disorders are relatively contradictory. Whereas elderly subject with slower reaction times improved their reaction with the vibrotactile feedback patients with slowness showed no improvement. We speculate that the mode of action of such a prosthesis is not to improve sensory feedback detection but to facilitate high level decisional process. 
It is likely that in order to obtain the time lead necessary for sensory substitution one would have to develop a means of predicting when balance is likely to be jeopardized","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133896382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Telepresence across delayed networks: a combined prediction and compression approach","authors":"Stella M. Clarke, G. Schillhuber, M. F. Zaeh, Heinz Ulbrich","doi":"10.1109/HAVE.2006.283795","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283795","url":null,"abstract":"The remote nature of telepresence scenarios can be seen as a strongpoint and also as a weakness. Although it enables the remote control of robots in dangerous or inaccessible environments, it necessarily involves some kind of communication mechanism for the transmission of control signals. This communication mechanism necessarily involves adverse network effects such as delay. Three mechanisms aimed at improving the effects of network delay are presented in this paper: (1) Motion prediction to partially compensate for network delays, (2) Force prediction to learn a local force model, thereby reducing dependency on delayed force signals, and (3) Haptic data compression to reduce the required bandwidth of high frequency data. The utilised motion prediction scheme was shown to improve operator performance, but had no influence on operator immersion. The force prediction decreased the deviation between the delayed and the expected forces, thereby stabilising the control loop. 
The developed haptic data compression scheme reduced the number of packets sent across the network by 86%","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123563592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Model Creation Using Self-Identifying Markers and SIFT Keypoints","authors":"M. Fiala, Chang Shu","doi":"10.1109/HAVE.2006.283776","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283776","url":null,"abstract":"3D object modeling can be accomplished using fiducial markers and/or feature detectors. Fiducial markers provide high reliability of detection, however, it is undesirable to cover an object to be modeled with markers. Feature detectors can find correspondences between images but they cannot always be relied on to be usable for camera localization. A method is shown that uses the strengths of both to automatically create 3D models of object as well as simultaneously calibrating the camera. Self-identifying fiducial markers are used in arrays to localize the camera pose for each image and SIFT features are used to find and match object features between images. Tetrahedrons formed by Delaunay triangulation of the 3D SIFT points are carved to the model. A system is shown where 3D models are generated automatically of an object placed on a marker array simply by capturing a set of images from uncontrolled locations from a camera with unknown intrinsic parameters","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127180531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Precise Positioning in a Telepresent Microassembly System","authors":"M. Zaeh, A. Reiter","doi":"10.1109/HAVE.2006.283788","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283788","url":null,"abstract":"The implementation of telepresence technology brings promising advantages for numerous technical systems. In the field of manual microassembly, where low batches are still handmade with microscopes and tweezers it offers ergonomic improvements and new production scenarios, since the worker is separated from the production environment and only connected via networks. Although this is very useful it also provides a thread in that the employed network could introduce delays into the system. Such delays negatively affect the precision of micro production. This research aims to show the extent to which delays can influence tasks which involve precision in a real microassembly setup and scenario","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"2012 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127396713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognizing Emotions on Static and Animated Avatar Faces","authors":"Sylvie Noël, S. Dumoulin, Thomas Whalen, John Stewart","doi":"10.1109/HAVE.2006.283772","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283772","url":null,"abstract":"Participants were shown static or animated versions of a FACS-compliant avatar face in the work of P. Ekman and W.V. Friesen (1978), and asked to identify the emotion that the face was displaying. In the first version of the face, happiness, sadness, and surprise were all recognized at high rates (80% or more) whatever the stimulus type, while anger and disgust had low recognition rates. The neutral face was not well recognized when viewed as a static image, but was recognized significantly more often when animated. In a second experiment, small changes made to \"tweak\" the neutral and angry faces were only partially successful. About half the people recognized the static angry face; far fewer recognized the animated version; and most people wrongly identified the neutral face, both in its static and its animated version. More surprisingly, the recognition rates for happiness, sadness and surprise dropped significantly during the second experiment, for both the static and the animated faces. This may be due to changes in the way the stimuli were presented between the first and the second experiment. 
These results suggest that people are sensitive to small, seemingly innocuous changes in the presentation of avatar faces","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129172029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Cloth Design System Using Haptic Device and Its Collaborative Environment","authors":"K. Miyahara, Y. Okada, K. Niijima","doi":"10.1109/HAVE.2006.283789","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283789","url":null,"abstract":"This paper proposes a cloth design system that provides intuitive operations, e.g., sewing, cutting and fitting a cloth in a virtual 3D space through direct manipulations using a force-feedback device. This cloth design system also provides a collaborative environment that allows two users to design a common cloth collaboratively in a virtual 3D space through the Internet. A lot of cloth simulation algorithms and systems have been proposed and existed so far. However, there is no cloth design system that supports a force-feedback device and provides a networked-collaborative environment. So, the authors developed such a cloth design system. This paper describes what kinds of intuitive operations are implemented, how the collaborative environment is designed, and quantitative performances of the system to clarify its usefulness","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123079074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recordable Haptic textures","authors":"Harish Vasudevan, M. Manivannan","doi":"10.1109/HAVE.2006.283779","DOIUrl":"https://doi.org/10.1109/HAVE.2006.283779","url":null,"abstract":"In this paper we present a method to record the surface texture of real life objects like metal files, sandpaper etc. These textures can subsequently be played back on virtual surfaces. Our method has the advantage that it can record textures using commonly available haptic hardware. We use the 3DOF SensAble PHANToM to record the textures. The algorithm involves creating recordings of the frequency content of a real surface, by exploring it with a haptic device. We estimate the frequency spectra at two different velocities, and subsequently interpolate between them on a virtual surface. The extent of correlation between real and simulated spectra was estimated and a near exact spectral match was obtained. The simulated texture was played back using the same haptic device. The algorithm to record and playback textures is simple and can be easily implemented for planar surfaces with uniform textures","PeriodicalId":365320,"journal":{"name":"2006 IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE 2006)","volume":"61 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122389918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}