Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1: Latest Publications

An inverse rendering approach for heterogeneous translucent materials
Jingjie Yang, Shuangjiu Xiao
{"title":"An inverse rendering approach for heterogeneous translucent materials","authors":"Jingjie Yang, Shuangjiu Xiao","doi":"10.1145/3013971.3013973","DOIUrl":"https://doi.org/10.1145/3013971.3013973","url":null,"abstract":"Since heterogeneous translucent materials, such as natural jades and marble, are complex hybrids of different materials, it is difficult to set precise optical parameters for subsurface scattering model as the material really has. In this paper, an inverse rendering approach is presented for heterogeneous translucent materials from a single input photograph. Given one photograph with an object of a certain heterogeneous translucent material, our approach can generate material distribution and estimate heterogeneous optical parameters to render images that look similar to the input photograph. We initialize material distribution using 3D Simplex Noise combined with Fractal Brownian Motion, and set color pattern of the noise using histogram matching method. The volume data with heterogeneous optical parameters is initialized based on the value of color pattern matched noise, and it is rendered in a certain lighting condition using Monte Carlo ray marching method. An iteration process is designed to approximate optical parameters to minimize the difference between rendering result and input photograph. Then the volume data with optimal heterogeneous optical parameters is obtained, which can be used for rendering any geometry model in different lighting conditions. Experimental results show that heterogeneous translucent objects can be rendered precisely similar to the material in the photograph with our approach.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125315495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
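The abstract describes seeding the material volume with 3D simplex noise combined with fractal Brownian motion (fBm) before the optimization loop. As a rough sketch of that initialization idea only (not the authors' implementation), the following Python accumulates noise octaves into a volume; the hash-based `value_noise3` stand-in and all parameter values are assumptions chosen for brevity, and a real system would substitute a proper simplex-noise routine.

```python
import numpy as np

def value_noise3(x, y, z):
    """Cheap hash-based stand-in for 3D simplex noise, in [-1, 1]."""
    h = np.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453
    return 2.0 * (h - np.floor(h)) - 1.0

def fbm_volume(res=32, octaves=5, lacunarity=2.0, gain=0.5):
    """Fill a res^3 volume with fractal Brownian motion noise."""
    xs, ys, zs = np.meshgrid(*([np.linspace(0.0, 1.0, res)] * 3), indexing="ij")
    volume = np.zeros((res, res, res))
    freq, amp = 1.0, 1.0
    for _ in range(octaves):
        volume += amp * value_noise3(xs * freq, ys * freq, zs * freq)
        freq *= lacunarity   # each octave adds finer detail...
        amp *= gain          # ...at smaller amplitude
    return volume

density = fbm_volume()
# Normalize to [0, 1] before mapping the values to optical parameters.
density = (density - density.min()) / (density.max() - density.min() + 1e-8)
```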
A mesh-insensitive elastic model for simulation of solid bodies
Juan Zhang, Mingquan Zhou, Guoguang Du, Youliang Huang, Zhongke Wu, X. Wang
{"title":"A mesh-insensitive elastic model for simulation of solid bodies","authors":"Juan Zhang, Mingquan Zhou, Guoguang Du, Youliang Huang, Zhongke Wu, X. Wang","doi":"10.1145/3013971.3013972","DOIUrl":"https://doi.org/10.1145/3013971.3013972","url":null,"abstract":"FEM-based elasticity models are popular in solid body simulation. To avoid its problems of mesh sensitivity and overly stiff, a novel smoothed pseudo-linear elasticity model is presented. First, the smoothed finite element method is employed to alleviate mesh distortion and overly stiff problems instead of the traditional spatial adaptive smoothing method. Then, we propose a smoothing domain-based stiffness warping technique to compensate the nonlinear errors introduced by linear elasticity models. With this approach, transient displacements are slightly affected by mesh distortion and total volumes are preserved under large rotations. It also shows apparently softening effects in the experiments. Simulation results are generated without adding significant complexity or computational cost to the standard corotational FEM.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125717291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
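Stiffness warping, which the abstract extends to smoothing domains, compensates for the rotational artifacts of linear elasticity by factoring a rotation out of the deformation before applying the linear stiffness matrix, i.e. f = -R K (R^T x - x0). The sketch below shows only that generic corotational step for a single tetrahedral element, using a polar decomposition via SVD; the smoothing-domain construction of the paper is not reproduced, and the element quantities are placeholders.

```python
import numpy as np

def polar_rotation(F):
    """Extract the rotation factor R from a 3x3 deformation gradient F."""
    U, _, Vt = np.linalg.svd(F)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    return R

def corotational_forces(K_e, x, x0, F):
    """Warped elastic forces for one tetrahedral element.

    K_e : (12, 12) linear stiffness matrix of the element
    x   : (4, 3) current vertex positions
    x0  : (4, 3) rest positions
    F   : (3, 3) deformation gradient of the element
    """
    R = polar_rotation(F)
    Rblk = np.kron(np.eye(4), R)   # block-diagonal rotation (12x12)
    # Rotate current positions back to the rest frame, apply the linear
    # stiffness, then rotate the resulting forces forward again.
    return -Rblk @ K_e @ (Rblk.T @ x.ravel() - x0.ravel())
```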
Semantic framework for interactive animation generation
Hui Liang, Jian Chang, Meili Wang, Can Chen, Jianjun Zhang
{"title":"Semantic framework for interactive animation generation","authors":"Hui Liang, Jian Chang, Meili Wang, Can Chen, Jianjun Zhang","doi":"10.1145/3013971.3013998","DOIUrl":"https://doi.org/10.1145/3013971.3013998","url":null,"abstract":"As a key technology, interactive animation has been widely used in Virtual Reality nowadays which is important for the success of various VR applications. However, despite many years history and the ability to produce stunning illusory and immersive effects for many VR applications, interactive animation generation remains one of the most labour-intensive ones. As a tedious task, its generation requires a complicated systematic procedure. Its inherent functional features involve a lot of complex technical problems: various aspects of requirements have to be handled, from the HCI technologies to the efficient management of animation data assets. With the continual evolvement of the complexity of current interactive VR technologies and animation practices, systematic and standardised description is imperative to provide a clear understanding of this production process. In this paper, a semantic framework is constructed at an abstract and semantic level using ontological methods for modelling the construction of interactive animation generation. To facilitate the process of interactive animation generation and improve its reusability and modularity, domain-specific ontologies based on the semantic framework are defined by formalising the multimodal interaction method and the construction of the animation data assets repository at the ontological implementation level. Finally, hand-gesture-based interactive animation is generated in the context of traditional Chinese shadow play, which involves novel functional features like hand-gesture-based interactive control and ontology-based intelligent animation assets management.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127422849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
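To make the ontological modelling concrete, here is a minimal, hypothetical sketch of how a domain ontology for hand-gesture-driven shadow-play animation could be expressed with rdflib. The namespace, class, and property names are invented for illustration and do not come from the paper.

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/shadowplay#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Domain classes: interaction inputs and animation assets.
for cls in ("HandGesture", "AnimationClip", "ShadowPuppet"):
    g.add((EX[cls], RDF.type, RDFS.Class))

# A gesture is linked to the clip it triggers.
g.add((EX.triggersClip, RDF.type, RDF.Property))
g.add((EX.triggersClip, RDFS.domain, EX.HandGesture))
g.add((EX.triggersClip, RDFS.range, EX.AnimationClip))

# One individual: a "swipe right" gesture that plays a walking clip.
g.add((EX.SwipeRight, RDF.type, EX.HandGesture))
g.add((EX.WalkCycle, RDF.type, EX.AnimationClip))
g.add((EX.SwipeRight, EX.triggersClip, EX.WalkCycle))
g.add((EX.SwipeRight, RDFS.label, Literal("swipe right")))

print(g.serialize(format="turtle"))
```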
Crowd formation via hierarchical planning
Xingce Wang, Na Liu, Shaolong Liu, Zhongke Wu, Mingquan Zhou, Jiale He, Peng Cheng, C. Miao, N. Magnenat-Thalmann
{"title":"Crowd formation via hierarchical planning","authors":"Xingce Wang, Na Liu, Shaolong Liu, Zhongke Wu, Mingquan Zhou, Jiale He, Peng Cheng, C. Miao, N. Magnenat-Thalmann","doi":"10.1145/3013971.3013978","DOIUrl":"https://doi.org/10.1145/3013971.3013978","url":null,"abstract":"Team formation with realistic crowd simulation behaviour is a challenge in computer graphics, multi-agent control and social simulation. In this paper, we propose a framework of crowd formation via hierarchical planning, which includes cooperative-task, coordinated-behaviour and action-control planning. In cooperative-task planning, we improve the grid potential field to achieve global path planning for a team. In coordinated-behaviour planning, we propose a time-space table to arrange behaviour scheduling for a movement. In action-control planning, we combine the gaze-movement angle model and fuzzy logic control to achieve agent action. Our method has several advantages: (1) The hierarchical architecture is guaranteed to match the human decision process from high to low intelligence. (2) The agent plans his behaviour only with the local information of his neighbour; the global intelligence of the group emerges from these local interactions. (3) The time-space table fully utilizes the three-dimensional information. Our method is verified using crowds of various densities, from sparse to dense crowds, using quantitative performance measures. The approach is independent of the simulation model and can be extended to other crowd simulation tasks.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116637572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
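The cooperative-task stage builds on a grid potential field for global path planning. As a generic baseline (not the paper's improved field), the sketch below computes the potential as BFS distance-to-goal over an occupancy grid and then greedily descends it to extract a path; the grid contents, start, and goal are assumed values.

```python
from collections import deque
import numpy as np

def potential_field(occupancy, goal):
    """BFS distance-to-goal over a 2D occupancy grid (1 = obstacle)."""
    H, W = occupancy.shape
    pot = np.full((H, W), np.inf)
    pot[goal] = 0.0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and occupancy[nr, nc] == 0 \
                    and pot[nr, nc] == np.inf:
                pot[nr, nc] = pot[r, c] + 1.0
                queue.append((nr, nc))
    return pot

def descend(pot, start):
    """Follow the steepest descent of the potential from start to the goal."""
    path, cur = [start], start
    while pot[cur] > 0.0:
        r, c = cur
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < pot.shape[0] and 0 <= c + dc < pot.shape[1]]
        cur = min(nbrs, key=lambda p: pot[p])   # neighbour closest to the goal
        path.append(cur)
    return path

grid = np.zeros((20, 20), dtype=int)
grid[5:15, 10] = 1                              # a wall the team must go around
field = potential_field(grid, goal=(18, 18))
print(descend(field, start=(1, 1)))
```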
Short-time activity recognition with wearable sensors using convolutional neural network
Min Sheng, Jing-Jiang Jiang, Benyue Su, Qingfeng Tang, A. Yahya, Guangjun Wang
{"title":"Short-time activity recognition with wearable sensors using convolutional neural network","authors":"Min Sheng, Jing-Jiang Jiang, Benyue Su, Qingfeng Tang, A. Yahya, Guangjun Wang","doi":"10.1145/3013971.3014016","DOIUrl":"https://doi.org/10.1145/3013971.3014016","url":null,"abstract":"Human activity recognition is still a challenging problem in particular environment. In this paper, we propose a novel method based on wearable sensors to effectively recognize the short-time human activity. Our proposed method is based on two stages: First stage, constructing an over-complete pattern library which includes different patterns of short-time human activity. This library is produced by segmenting a long-time activity with sliding window method. Second stage, extracting robust features from an over-completed pattern library and establishing an off-line classification model through convolutional neural network (CNN). Consequently, an outstanding classification result on benchmark database WARD1.0 is successfully achieved based on the previous idea. Experimental results indicate that the proposed method is able to recognize the short-time human activity and at the same time satisfy the requirement of online recognition.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131074356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
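As a concrete illustration of the two stages described above (sliding-window segmentation of a long recording, then a CNN classifier over each short window), here is a minimal PyTorch sketch. The window length, sensor-channel count, and network layout are assumptions for illustration, not the configuration reported for WARD1.0.

```python
import torch
import torch.nn as nn

def sliding_windows(signal, win=64, stride=32):
    """Cut a (channels, T) recording into overlapping short-time windows."""
    # unfold(dim, size, step) -> (channels, n_windows, win); move windows first.
    return signal.unfold(1, win, stride).permute(1, 0, 2)

class ActivityCNN(nn.Module):
    """Small 1D CNN over a (channels, win) sensor window."""
    def __init__(self, channels=15, win=64, n_classes=13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (win // 4), n_classes)

    def forward(self, x):                      # x: (batch, channels, win)
        h = self.features(x)
        return self.classifier(h.flatten(1))

recording = torch.randn(15, 1000)              # 15 sensor channels, 1000 samples
windows = sliding_windows(recording)           # (n_windows, 15, 64)
logits = ActivityCNN()(windows)
print(logits.shape)                            # (n_windows, 13)
```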
A two-dimensional editing system based on hand gestures and real object manipulation using RGB-D sensor
Hsin-Kai Chen, Sai-Keung Wong, Po-Han Huang, Yu-An Chang
{"title":"A two-dimensional editing system based on hand gestures and real object manipulation using RGB-D sensor","authors":"Hsin-Kai Chen, Sai-Keung Wong, Po-Han Huang, Yu-An Chang","doi":"10.1145/3013971.3013975","DOIUrl":"https://doi.org/10.1145/3013971.3013975","url":null,"abstract":"This paper presents a two-dimensional editing system which compromises the advantages of using hand gestures and real blocks to manipulate virtual objects. The system uses a RGB-D sensor to capture the color and depth images of the hands and real blocks. The raw images of the hands and blocks are segmented. Then we employ a contour based algorithm to perform hand gesture and shape recognition. The contours are extracted and reduced for eliminating the redundant points. After that a set of feature points are collected and they can be used for recognizing the hand gesture and the shapes of the real blocks. The users have two ways to interact with the virtual objects: 1) manipulating the real blocks to control them; and 2) using hand gestures as editing operations to edit them. Experimental results indicate that our system achieves high recognition rate. Our system supports the basic editing operations to the virtual objects, such as 'move', 'scale', 'rotate', and 'copy'. We also conducted a user study which reveals that our system is intuitive and entertaining.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133311940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
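The contour step described above (extract contours, then reduce redundant points before computing features) maps naturally onto standard OpenCV calls. The snippet below is a generic sketch of that idea, assuming OpenCV 4.x and an already-segmented binary mask; the simplification tolerance is an arbitrary choice, not the paper's.

```python
import cv2
import numpy as np

def simplified_contours(mask, eps_ratio=0.01):
    """Extract outer contours from a binary mask and reduce their points."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    reduced = []
    for cnt in contours:
        # Tolerance proportional to the contour perimeter.
        eps = eps_ratio * cv2.arcLength(cnt, True)
        reduced.append(cv2.approxPolyDP(cnt, eps, True))
    return reduced

# Toy example: a filled rectangle acting as a segmented block.
mask = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(mask, (60, 60), (200, 180), 255, -1)
polys = simplified_contours(mask)
print([p.reshape(-1, 2) for p in polys])   # corner-like feature points
```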
Mixed reality based interaction system for digital heritage
Nishant Bugalia, Subodh Kumar, P. Kalra, Shantanu Choudhary
{"title":"Mixed reality based interaction system for digital heritage","authors":"Nishant Bugalia, Subodh Kumar, P. Kalra, Shantanu Choudhary","doi":"10.1145/3013971.3014000","DOIUrl":"https://doi.org/10.1145/3013971.3014000","url":null,"abstract":"User interfaces that leverages vivid computer graphics and virtual reality technologies provide effective means for exploring and inspecting cultural artifacts. Such virtual inspection systems help experience and disseminate cultural heritage. For such systems to be truly ubiquitous, they need to expose a user-friendly interface that an untrained population can find intuitive and engaging. In this paper, we present one such mixed reality (MR) based interaction system that can be used to visualize as well as explore cultural heritage. The system is demonstrated by an interactive exploration of 15th century 'Vittala Temple' located at a UNESCO world heritage site, Hampi. Our system investigates the augmentation of computer generated imagery with a physical replica to increase immersion and awareness. It uses monoscopic projection mapping on the 3D-printed replica laid out on a tabletop along with a vertical projection wall behind the tabletop, on which a 3D-rendered view is displayed. Projection on the physical layout is synchronized with the 3D rendering to allow projection of texture, navigation path, users current position, etc. on the tabletop. Users can interact with this system by simply waving a laser pointer to select structures or to define a path. We have additionally explored the benefit of using a steering wheel to further control the camera direction. Such a system creates and maintains a semantic relation between users actions and the resulting effect. The UI enables two major functionalities: interactive inspection of an architectural complex and the multimedia data attached to various components and artifacts.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127828532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 19
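Synchronizing the tabletop projection with the wall rendering ultimately requires projecting points of the replica's 3D model into projector pixel coordinates. The sketch below shows only that generic projection-mapping step with a pinhole projector model; the calibration values are made-up placeholders, and this is not the authors' pipeline.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 model-space points into projector pixels (pinhole model)."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T      # model -> projector frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                    # perspective divide

# Placeholder calibration: intrinsics K and pose (R, t) of the replica with
# respect to the projector. In practice these come from projector calibration.
K = np.array([[1400.0, 0.0, 640.0],
              [0.0, 1400.0, 400.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.5])

path_on_model = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.02],
                          [0.2, 0.05, 0.02]])        # navigation path vertices
print(project_points(path_on_model, K, R, t))        # pixels to draw on replica
```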
A mixed-reality museum tourism framework based on HMD and fisheye camera
Shaobo Zhang, Wanqing Zhao, J. Wang, Hangzai Luo, Xiaoyi Feng, Jinye Peng
{"title":"A mixed-reality museum tourism framework based on HMD and fisheye camera","authors":"Shaobo Zhang, Wanqing Zhao, J. Wang, Hangzai Luo, Xiaoyi Feng, Jinye Peng","doi":"10.1145/3013971.3014023","DOIUrl":"https://doi.org/10.1145/3013971.3014023","url":null,"abstract":"Mixed-reality (MR) combines virtual and physical components with display device, which can be used to construct new environments where physical and digital objects co-exist and interact in real time. In this paper, we introduce a MR museum tourism system based popular HMD and fisheye camera, which can provide a richer experience by overlying the virtual model in the real environment. First, object detection and tracking methods are used to identify the real object and to obtain its geometrical information from camera. Then, the orientation relationship between real object and camera is decided to convert them to the same coordinate system. Finally, the corresponding model is rendered in the HMD device. Experiments results show that our proposed system can accurately and stably capture the real objects and render the corresponding virtual model in real time.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"186 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124929856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
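Bringing the detected real object and the camera into the same coordinate system, as described above, is commonly done by estimating the object's pose from 2D-3D correspondences. The sketch below uses OpenCV's solvePnP for that generic step; the correspondences and camera intrinsics are placeholder values, and the detection and tracking front end is omitted.

```python
import cv2
import numpy as np

# Known 3D points on the exhibit (object frame, metres) and their detected
# 2D pixel locations in the undistorted camera image (placeholder values).
object_pts = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0],
                       [0.3, 0.2, 0.0], [0.0, 0.2, 0.0]], dtype=np.float64)
image_pts = np.array([[320.0, 240.0], [420.0, 235.0],
                      [425.0, 310.0], [318.0, 305.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                       # assume the image is already undistorted

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)

# 4x4 transform taking points from the object frame into the camera frame;
# the virtual model can then be placed with this matrix in the HMD renderer.
T_cam_obj = np.eye(4)
T_cam_obj[:3, :3] = R
T_cam_obj[:3, 3] = tvec.ravel()
print(T_cam_obj)
```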
Position based balloon angioplasty
P. Tang, Dongjin Huang, Yin Wang, Ruobin Gong, Wen Tang, Youdong Ding
{"title":"Position based balloon angioplasty","authors":"P. Tang, Dongjin Huang, Yin Wang, Ruobin Gong, Wen Tang, Youdong Ding","doi":"10.1145/3013971.3013996","DOIUrl":"https://doi.org/10.1145/3013971.3013996","url":null,"abstract":"Balloon angioplasty is an endovascular procedure to widen narrowed or obstructed blood vessels, typically to treat arterial atherosclerosis. Simulating angioplasty procedure in the complex vascular structures is a challenge task since the balloon and vessels are both flexible bodies. In this paper, we proposed a position based balloon physical model to solve nonlinear physical deformation in the process of balloon inflation. Firstly, the balloon is discrete modeled by the closed triangle mesh, and the hyperelastic membrane material and continuum based formulation are combined to compute the mechanical properties in the process of balloon inflation. Then, an adaptive air mesh generation algorithm is proposed as a preprocessing procedure for accelerating the coming collision process between the balloon and blood vessels according to the characteristic of collision region which is relative fixed. The experiment results show that this physical model is feasible, which could simulate the contact and deformation process between the inflation balloon and the diseased blood vessel wall with good robustness and in real-time.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132905124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
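Position-based dynamics, on which the balloon model is built, resolves deformation by iteratively projecting constraints directly on vertex positions. Below is a sketch of the standard PBD distance-constraint projection over a mesh edge list; this is textbook PBD rather than the paper's hyperelastic membrane formulation, and the masses, stiffness, and iteration count are arbitrary.

```python
import numpy as np

def project_distance_constraints(x, inv_mass, edges, rest_len, iterations=10,
                                 stiffness=1.0):
    """Standard PBD projection of edge-length constraints on positions x."""
    for _ in range(iterations):
        for (i, j), l0 in zip(edges, rest_len):
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            w = inv_mass[i] + inv_mass[j]
            if w == 0.0:
                continue
            # Move both endpoints along the edge to restore the rest length.
            corr = stiffness * (dist - l0) / (dist * w) * d
            x[i] -= inv_mass[i] * corr
            x[j] += inv_mass[j] * corr
    return x

# Tiny example: one stretched edge relaxing back to its rest length.
x = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
inv_mass = np.array([1.0, 1.0])
edges = [(0, 1)]
rest_len = [1.0]
print(project_distance_constraints(x, inv_mass, edges, rest_len))
```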
MDIS cloth system: virtual reality technology for firefighter training
Fangjie Yu, Xinlei Hu, Chunyong Ma, Yang Zhao, Yunfei Liu, Fan Yang, Ge Chen
{"title":"MDIS cloth system: virtual reality technology for firefighter training","authors":"Fangjie Yu, Xinlei Hu, Chunyong Ma, Yang Zhao, Yunfei Liu, Fan Yang, Ge Chen","doi":"10.1145/3013971.3013977","DOIUrl":"https://doi.org/10.1145/3013971.3013977","url":null,"abstract":"Fire accidents can cause numerous casualties and heavy property losses, especially, in petrochemical industry, such accidents are likely to cause secondary disasters. However, common fire drill training would cause loss of resources and pollution. We designed a multi-dimensional interactive somatosensory (MDIS) cloth system based on virtual reality technology to simulate fire accidents in petrochemical industry. It provides a vivid visual and somatosensory experience. A thermal radiation model is built in a virtual environment, and it could predict the destruction radius of a fire. The participant position changes are got from Kinect, and shown in virtual environment synchronously. The somatosensory cloth, which could both heat and refrigerant, provides temperature feedback based on thermal radiation results and actual distance. In this paper, we demonstrate the details of the design, and then verified its basic function. Heating deviation from model target is lower than 3.3 °C and refrigerant efficiency is approximately two times faster than heating efficiency.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125652044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
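The temperature feedback described above is driven by a thermal radiation model evaluated at the participant's distance from the virtual fire. As an illustration of the general idea only, the sketch below uses a standard point-source radiation model, q = eta * Q / (4 * pi * r^2); the heat release rate, radiative fraction, and the mapping from flux to cloth set-point are assumed values, not those of the MDIS system.

```python
import math

def radiant_flux(Q_kw, distance_m, radiative_fraction=0.3):
    """Point-source model: radiant heat flux (kW/m^2) at a given distance."""
    return radiative_fraction * Q_kw / (4.0 * math.pi * distance_m ** 2)

def cloth_setpoint(flux_kw_m2, ambient_c=25.0, gain_c_per_kw_m2=8.0,
                   max_c=45.0):
    """Map the flux to a safe target temperature for the somatosensory cloth."""
    return min(max_c, ambient_c + gain_c_per_kw_m2 * flux_kw_m2)

Q = 5000.0                                  # assumed fire heat release rate, kW
for r in (30.0, 20.0, 10.0):                # participant distance from the fire
    q = radiant_flux(Q, r)
    print(f"r = {r:4.1f} m  flux = {q:5.2f} kW/m^2  "
          f"cloth target = {cloth_setpoint(q):4.1f} C")
```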