Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry — Latest Publications

A new cognition-based chat system for avatar agents in virtual space
Soo-Hyun Park, Seung-Hyun Ji, Dong-Sung Ryu, Hwan-Gue Cho
DOI: 10.1145/1477862.1477879 | Published: 2008-12-08
Abstract: Short internet-based chat is typical of modern communication, especially for online game players. Improved chat tools are available to avatar agents in virtual spaces (e.g., Second Life) thanks to the fast-evolving 3D internet. Plain text chat, the most common form, is easy to use, but it is hard to tell who is talking to whom. Advanced chat with 2D word balloons in 3D virtual space is difficult or unfamiliar, since it ignores real-world constraints. We propose a human-cognition-based chat system for virtual avatars that uses geometric information. In our chat framework, if an agent (avatar) wants to talk with other people, it must approach within an "audible distance" to read the text, just as one must in the real world to join a conversation. Additionally, we propose a new model to manage the virtual chat social network on top of our cognition-based chat system. Our experiment showed that the number of agents participating in a conversation saturates regardless of the total number of chat agents, as occurs in real-world chat. Thus, the social graph for chat agents always remains of manageable size in our cognition-based chat system.
Citations: 11
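The "audible distance" rule described in the abstract is simple to sketch: a chat line is delivered only to avatars standing close enough to the speaker. The following is a minimal illustration, not the paper's implementation; the radius value and the data layout are assumptions.

```python
import math

def audible(speaker_pos, listener_pos, radius=10.0):
    """True if the listener stands within the speaker's audible
    distance (the default radius of 10.0 units is an assumption)."""
    return math.dist(speaker_pos, listener_pos) <= radius

def deliver(message, speaker, listeners, radius=10.0):
    """Return the listeners that actually receive `message`: only
    those inside the speaker's audible radius, mirroring the
    real-world constraint on joining a conversation."""
    return [l for l in listeners if audible(speaker["pos"], l["pos"], radius)]
```

With a radius of 10, an avatar 5 units from the speaker receives the line while one 30 units away does not, which is what bounds the size of any one conversation.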
Large area robust hybrid tracking with life-size avatar in mixed reality environment: for cultural and historical installation
W. Pensyl, D. Jernigan, T. Qui, Hsin Pei Fang, Shang Ping Lee
DOI: 10.1145/1477862.1477874 | Published: 2008-12-08
Abstract: We have developed a system that tracks participant-observers accurately over a large area for the purpose of immersing them in a mixed reality environment. The system is robust even under challenging lighting conditions. Accurate tracking of the observer's position and viewing orientation is achieved by combining hybrid inertial sensors with computer vision techniques. We demonstrate our results by presenting a life-size, animated human avatar sitting in a real chair, in a stable, low-jitter manner. The installation lets observers walk around and navigate the environment freely while still viewing the avatar from various angles. The project provides an exciting way to present cultural and historical narratives vividly in the real world.
Citations: 7
Simulation of shallow-water waves in coastal region for marine simulator
Yongjin Li, Yicheng Jin, Yong Yin, Helong Shen
DOI: 10.1145/1477862.1477882 | Published: 2008-12-08
Abstract: This paper presents a new method for simulating shallow-water waves for marine simulators. First, a sequence of sea-surface height fields is obtained by solving 2D Boussinesq-type equations. These height fields exhibit the combined effects of most shallow-water wave phenomena in coastal regions, such as shoaling, refraction, diffraction, reflection, and non-linear wave-wave interaction. Second, the height fields are synthesized into a new sequence of unlimited length by reordering them according to their similarity. Finally, the height fields are used, in the new order, as vertex textures sampled by a view-dependent sea-surface grid. Experimental results show that the simulated shallow-water waves look realistic and render quickly, making the method suitable for real-time simulation applications, especially marine simulators.
Citations: 16
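The reordering step in the abstract — arranging precomputed height fields by similarity so the sequence can run indefinitely — can be sketched as a greedy nearest-neighbour ordering. This is an illustrative sketch under assumed data (small NumPy arrays standing in for solved height fields), not the authors' exact procedure.

```python
import numpy as np

def similarity(a, b):
    # RMS difference between two height fields (lower = more similar)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def reorder_by_similarity(frames):
    """Greedy nearest-neighbour ordering: start from frame 0 and always
    append the unused frame closest to the last one, so consecutive
    frames differ as little as possible and the sequence can be
    repeated with small seams."""
    remaining = set(range(1, len(frames)))
    order = [0]
    while remaining:
        last = frames[order[-1]]
        nxt = min(remaining, key=lambda i: similarity(last, frames[i]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

For example, constant height fields with values 0, 3, 1, 2 come out in the order 0, 1, 2, 3 (indices [0, 2, 3, 1]), stepping through the smoothest transitions.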
AR-assisted in situ machining simulation: architecture and implementation
J. Zhang, S. Ong, A. Nee
DOI: 10.1145/1477862.1477897 | Published: 2008-12-08
Abstract: This paper presents a novel implementation of machining simulation in a real machining environment using Augmented Reality (AR) technology. This in situ machining simulation system allows a machinist to analyze the simulation process, adjust the machining parameters, and observe the results in real time in a real machining environment. Such a system is useful for machinists and trainees during the trial and learning stages, allowing them to experiment with different machining parameters on a real machine without the risk of machine or tool breakage. The paper presents the system's two main functional modules: the tracking and registration module and the physical simulation module. Experiments and a survey were conducted to validate and evaluate the system's performance.
Citations: 10
Interacting with 3D objects in a virtual environment using an intuitive gesture system
C. Manders, F. Farbiz, K. Tang, M. Yuan, Bryan Chong, G. G. Chua
DOI: 10.1145/1477862.1477869 | Published: 2008-12-08
Abstract: We present a system for interacting with 3D objects in a 3D virtual environment. Exploiting the fact that a typical head-mounted display (HMD) does not cover the user's entire face, we place a fiducial marker on the HMD to locate the user's exposed facial skin. From this information a skin model is built and combined with depth information obtained from a stereo camera. Used in tandem, these allow the positions of the user's hands to be detected and tracked in real time. Once both hands are located, our system allows the user to manipulate an object with five degrees of freedom (translation along the x, y, and z axes, plus roll and yaw rotations) in virtual three-dimensional space using a series of intuitive hand gestures.
Citations: 11
Robust hand tracking using a simple color classification technique
M. Yuan, F. Farbiz, C. Manders, K. Tang
DOI: 10.1145/1477862.1477870 | Published: 2008-12-08
Abstract: Skin color is a strong cue in vision-based human tracking, and skin detection has been widely used in applications such as face and hand tracking and people detection in video databases. In this paper, we propose and develop an effective hand tracking method based on simple color classification. The method comprises two major procedures: training and tracking. In the training procedure, the user marks a region on a hand to obtain training data; based on the skin-color distribution, the training data are classified into several color clusters using a randomized-list data structure. In the tracking procedure, the hand is segmented from the background in real time using the randomized lists built during training. The proposed method has two advantages: (1) it is fast, because the segmentation algorithm operates only on a small region surrounding the hand; and (2) it is robust under different lighting conditions, because lighting is not a factor in the color classification. Several experiments were conducted to validate the performance of the proposed method. The method has good potential in many real applications, such as virtual reality and augmented reality systems.
Citations: 27
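The train-then-classify pipeline in the abstract can be illustrated with a crude stand-in: cluster RGB samples from a user-marked region with a small k-means, then classify a pixel as skin if it lies close to any cluster centre. The paper's randomized-list structure is replaced here by plain cluster centres, and k, the distance threshold, and all values are assumptions for illustration.

```python
import numpy as np

def train_color_clusters(samples, k=4, iters=10, seed=0):
    """Crude k-means over RGB training pixels drawn from a user-marked
    skin region; returns k cluster centres (a stand-in for the paper's
    randomized-list structure)."""
    rng = np.random.default_rng(seed)
    centres = samples[rng.choice(len(samples), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every sample to its nearest centre, then re-estimate
        d = np.linalg.norm(samples[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = samples[labels == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return centres

def classify_skin(pixels, centres, thresh=30.0):
    """A pixel counts as skin if it is within `thresh` (assumed value)
    of any trained cluster centre."""
    d = np.linalg.norm(pixels[:, None, :].astype(float) - centres[None, :, :], axis=2)
    return d.min(axis=1) < thresh
```

In use, the classifier would be applied only to a small window around the last known hand position, which is what makes the per-frame cost low.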
GPU-accelerated multi-valued solid voxelization by slice functions in real time
Duoduo Liao
DOI: 10.1145/1477862.1477886 | Published: 2008-12-08
Abstract: This paper presents a GPU-accelerated, slice-independent solid voxelization approach that uses a dynamic slice-function mechanism and masking techniques to significantly improve solid voxelization speed in real time, while creating a variety of multi-valued solid volumetric models from different slice functions. In particular, by dynamically applying different slice functions, any closed-surface geometric model can be voxelized into a solid volumetric representation with any kind of interior material, such as rainbow, marble, wood, or translucent jade. The paper discusses in detail the design of the dynamic slice function, the principle and algorithm of solid-slice creation, the real-time solid voxelization algorithm, and the GPU-based acceleration techniques. The algorithms are easy to implement and convenient to integrate into many applications, such as volume modeling, collision detection, medical simulation, volume animation, volume deformation, 3D printing, and computer art. Experimental results and data analysis on complex objects demonstrate the effectiveness, flexibility, and versatility of the approach.
Citations: 12
Fast and robust parameter estimation method for patch-based texture synthesis
Jakrapong Narkdej, P. Kanongchaiyos
DOI: 10.1145/1477862.1477909 | Published: 2008-12-08
Abstract: Patch-based texture synthesis uses an MRF texture model to synthesize a larger texture from a smaller sample patch, governed by two user-defined parameters: patch size and boundary-zone width. Obtaining optimal values for these parameters requires analyzing the texture, which is too expensive for real-time synthesis of large textures. This paper introduces a more efficient method for finding optimal values of the two parameters. First, we use graph-based image segmentation to extract feature segments from the input sample. We then choose a set of major segments that preserve the main features to appear in the final result. Finally, we calculate the two parameters from the size and repetition of the segments. Experimental results show that our technique reduces the computational time for determining the parameters compared to the previous method, and that it works with several types of textures.
Citations: 1
The feasibility of human haptic emotion as a feature to enhance interactivity and immersiveness on virtual reality game
A. Basori, D. Daman, A. Bade, M. S. Sunar, N. Saari
DOI: 10.1145/1477862.1477910 | Published: 2008-12-08
Abstract: Interactivity and immersiveness are key to virtual reality games. Many researchers concentrate on visual fidelity in order to achieve convincing visualization and animation, and audio has likewise been used to enhance the immersiveness of virtual reality games. Recently, haptics has emerged as a way of stimulating interactivity between virtual characters and players so as to hold players' attention. This paper examines the feasibility of applying human haptic emotion in a virtual reality game. The idea is that emotion is conveyed through a haptic device between the player and the game engine in order to gain more interactivity and immersiveness. The device acts as a bridge, delivering the virtual character's emotions to players through the sense of touch. We classify each emotion into vibrations of distinct magnitude and frequency generated by the haptic device; the classification tells players what the virtual character is feeling while they play. This approach should be of great benefit in making virtual reality games more lively and engaging.
Citations: 29
An exposure invariant video retrieval method for eyetap devices
L. Chaisorn, C. Manders
DOI: 10.1145/1477862.1477914 | Published: 2008-12-08
Abstract: In the field of mediated reality, significant research has been devoted to the development and use of Eyetap devices. With such a device it is possible to continuously record a portion of the image entering one or both eyes. Storage is one problem with such continuous recording; fortunately, it can be addressed by efficient video coding and by the fact that storage is becoming ever larger and very inexpensive. A potentially greater problem is that, given such a large and ever-growing image and video collection, retrieval becomes a demanding task; moreover, as the libraries grow, detecting modified copies becomes increasingly important. In this paper, we present a framework for using an Eyetap device to retrieve images and videos that were themselves captured by Eyetap devices or by other means. We employ an ordinal-based method with an enhanced similarity measure for image and video indexing and retrieval. Because our Eyetap video collection is still quite limited, we tested our system on 50 videos from TRECVID 2006. The experimental results demonstrate that our system is efficient and robust, and thus appropriate for the Eyetap applications we have specified.
Citations: 6
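The ordinal-based measure mentioned in the abstract is what gives exposure invariance: a frame is divided into blocks, the block mean intensities are replaced by their ranks, and any monotone exposure change leaves the ranks untouched. The block grid size and the distance function below are assumptions for illustration, not the paper's exact enhanced measure.

```python
import numpy as np

def ordinal_signature(gray, blocks=(3, 3)):
    """Mean intensity of each block, replaced by its rank among all
    blocks. A monotone exposure change (gain/offset, gamma) reorders
    no block means, so the signature is unchanged."""
    h, w = gray.shape
    bh, bw = h // blocks[0], w // blocks[1]
    means = np.array([[gray[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                       for j in range(blocks[1])]
                      for i in range(blocks[0])])
    flat = means.ravel()
    return flat.argsort().argsort()   # rank of each block mean

def ordinal_distance(sig_a, sig_b):
    """Sum of absolute rank differences; 0 means identical ordering."""
    return int(np.abs(sig_a - sig_b).sum())
```

Two recordings of the same scene at different exposures therefore index to the same signature, which is the property an Eyetap retrieval system needs.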