Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry: Latest Publications

A unified simulation framework for water phase transition based on particles
Chenyu Bian, Shuangjiu Xiao, Zhi Li
{"title":"A unified simulation framework for water phase transition based on particles","authors":"Chenyu Bian, Shuangjiu Xiao, Zhi Li","doi":"10.1145/3284398.3284419","DOIUrl":"https://doi.org/10.1145/3284398.3284419","url":null,"abstract":"Water phase transitions are fundamental and complex phenomena in nature. Previous researches usually studied every process during water phase transitions individually and simulated it separately. This is because the phase transition process of water is very complex. In this paper, we proposed a novel method to simulate the processes of water phase transitions uniformly. We firstly established PBMR (Position Based Material Representation) which is based on PBD (Position Based Dynamics) to describe all three different kinds of water material. And then, we designed a unified computational process to modeled water phase transitions. In our unified computational process, heat transfer mechanism and mass transfer mechanism were main concerns in our consideration. With our method, it is capable to simulate water phase transition uniformly.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129100828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
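The abstract above describes per-particle heat-transfer and mass-transfer mechanisms driving phase changes. The following is a minimal, illustrative Python sketch of that general idea only, not the authors' PBMR/PBD implementation: particles exchange heat with neighbors, and a solid particle melts after absorbing a fixed amount of "latent" energy. The particle structure, the constants, and the brute-force neighbor search are all assumptions made for illustration.

```python
import math
from dataclasses import dataclass

# Illustrative constants (simplified units, not values from the paper).
CONDUCTIVITY = 0.5        # heat-exchange coefficient between neighboring particles
LATENT_HEAT_PROXY = 30.0  # stand-in for latent heat of melting (illustration only)
MELT_POINT = 0.0          # degrees Celsius

@dataclass
class Particle:
    pos: tuple
    temp: float
    state: str = "liquid"        # "solid", "liquid", or "gas"
    stored_latent: float = 0.0   # energy accumulated toward a phase change

def neighbors(i, particles, radius=1.0):
    """Brute-force neighbor search; a real solver would use a spatial hash."""
    for j, q in enumerate(particles):
        if j != i and math.dist(particles[i].pos, q.pos) < radius:
            yield q

def step_heat_and_phase(particles, dt=0.01):
    """One explicit heat-transfer step followed by a melt check."""
    new_temps = [p.temp + dt * sum(CONDUCTIVITY * (q.temp - p.temp)
                                   for q in neighbors(i, particles))
                 for i, p in enumerate(particles)]
    for p, t in zip(particles, new_temps):
        p.temp = t
        # A solid particle above the melting point stores the excess as latent
        # energy and stays at the melting point until it can melt.
        if p.state == "solid" and p.temp > MELT_POINT:
            p.stored_latent += p.temp - MELT_POINT
            p.temp = MELT_POINT
            if p.stored_latent >= LATENT_HEAT_PROXY:
                p.state, p.stored_latent = "liquid", 0.0

if __name__ == "__main__":
    ice = Particle(pos=(0.0, 0.0, 0.0), temp=-5.0, state="solid")
    warm = Particle(pos=(0.5, 0.0, 0.0), temp=80.0)
    ps = [ice, warm]
    for _ in range(2000):
        step_heat_and_phase(ps)
    print(ice.state, round(ice.temp, 2), round(warm.temp, 2))  # the ice particle melts
```

A real unified framework would also handle evaporation and condensation and couple these updates with the PBD position solver; the sketch covers only the melt path.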
Development of a VR prototype for enhancing earthquake evacuee safety
Hui Liang, Fei Liang, Fenglong Wu, Changhai Wang, Jian Chang
{"title":"Development of a VR prototype for enhancing earthquake evacuee safety","authors":"Hui Liang, Fei Liang, Fenglong Wu, Changhai Wang, Jian Chang","doi":"10.1145/3284398.3284417","DOIUrl":"https://doi.org/10.1145/3284398.3284417","url":null,"abstract":"Training and education for enhancing evacuee safety is essential to reduce deaths, injuries and damages from disasters, such as fire and earthquake. However, traditional training approaches, e.g. evacuation drills, hardly simulate the real world emergency, which lead to the limitation of reality and poor interaction. In addition, traditional approaches may not provide investigation of participants' behavior during evacuations and give feedback after training. As a novel and effective alternative to overcome these limitations, in this paper, a VR-based training prototype system is designed and implemented for enhance earthquake evacuation safety. Key modules including earthquake scenario simulation, damage representation, interaction, player investigation and feedback are developed. In the immersive VR environment, players can be provided with learning outcomes as well as behavior feedback as crucial goals for safety training. Based on the result of the evaluation, this prototype has proven to be promising for enhancing earthquake evacuee safety and shows positive pedagogical functions.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130023565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Analyzing the relationship between pressure sensitivity and player experience
Henry Fernández, Koji Mikami, K. Kondo
{"title":"Analyzing the relationship between pressure sensitivity and player experience","authors":"Henry Fernández, Koji Mikami, K. Kondo","doi":"10.1145/3284398.3284421","DOIUrl":"https://doi.org/10.1145/3284398.3284421","url":null,"abstract":"This paper summarizes the findings of a study about patterns between the levels of pressure exerted on a gamepad's buttons and the way that players feel when playing. We designed an experiment to trigger different emotions (boredom, frustration, fun) from players when playing a 2D space shooter and analyzed the relationship between pressure and experience. Results show clear trends and a close correlation between pressure and specific players' aspects. Older players tended to press the button harder and players with more experience tended to press it softer. We also found out that there is a strong correlation between pressure and aspects such as difficulty, fun, arousal and dominance, being the correlation: pressure/fun (76.92%) and pressure/dominance (78.57%) the most relevant ones. Finally, frustration, boredom and valence had unclear results, however, trends showed the following: the more frustration, the harder players pressed the button, boredom has an inversely proportional relation with pressure and the results for valence were 61.54% positive, without having a solid final conclusion about this parameter. We propose a parameters classification to carry on with this result in our next step and show how could we design an effective estimation method in the future.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115443110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
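The paper reports correlations between button pressure and self-reported experience. As a hedged illustration only, the snippet below shows how such a pressure/experience correlation could be computed from per-player averages; the sample data, the field names, and the use of Pearson correlation are assumptions, not the authors' protocol or dataset.

```python
from statistics import mean
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-player data: mean button pressure (arbitrary sensor units)
# and a self-reported "fun" rating on a 1-9 scale. Not the paper's data.
players = [
    {"pressure": 0.42, "fun": 6}, {"pressure": 0.55, "fun": 7},
    {"pressure": 0.31, "fun": 4}, {"pressure": 0.68, "fun": 8},
    {"pressure": 0.25, "fun": 3}, {"pressure": 0.60, "fun": 7},
]

r = pearson([p["pressure"] for p in players], [p["fun"] for p in players])
print(f"pressure/fun correlation: {r:.2f}")  # about 0.98 for this toy sample
```

With real data one would compute the same statistic per aspect (difficulty, arousal, dominance, valence) and check significance before drawing conclusions.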
Design and evaluation of multiple role-playing in a virtual film set
I-Sheng Lin, Tsai-Yen Li, Quentin Galvane, M. Christie
{"title":"Design and evaluation of multiple role-playing in a virtual film set","authors":"I-Sheng Lin, Tsai-Yen Li, Quentin Galvane, M. Christie","doi":"10.1145/3284398.3284424","DOIUrl":"https://doi.org/10.1145/3284398.3284424","url":null,"abstract":"Cinematography affects how the audience perceives a movie. A same story plot can be interpreted differently through the presentation of different camera movements, which show the importance of cinematography in filmmaking. Typically, filmmaking is costly, and beginners and amateurs rarely have the opportunity to play and do an experiment on a film set. In this work, we aim to design and construct a virtual environment for film shooting, allowing a user to play multiple roles in a virtual film set and emulating the process of the filmmaking. Our system provides camera shooting assistants, tools for field directing and real-time editing, aiming to help novices learn cinematographic concepts, track the progress of filmmaking, and create a personalized movie. In order to verify that our system is a user-friendly and effective tool for experiencing filmmaking, we have conducted an experiment to observe the behaviors and obtain feedback from participants with various cinematographic backgrounds.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122790626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
CamBridge: a bridge of camera aesthetic between virtual environment designers and players
Chanchan Xu, Guangzheng Fei, Honglei Han
{"title":"CamBridge: a bridge of camera aesthetic between virtual environment designers and players","authors":"Chanchan Xu, Guangzheng Fei, Honglei Han","doi":"10.1145/3284398.3284423","DOIUrl":"https://doi.org/10.1145/3284398.3284423","url":null,"abstract":"The designer of the virtual environment have been trying for decades to provide the player with more enjoyable, comfortable and also informative user experiences, and yet still fail to ensure that the player follow the preset instructions and even implicit suggestions faithfully and naturally, due to the designer's invisibility during runtime, and the player's individual diversity and individual impromptu in manipulations. We believe that the camera is the mainly messenger for the designer and the player to communicate, and intend to build a bridge between them. By binding the designer's aesthetic ideas to the parameters of the camera's movement, we enable the player to roam in the virtual scene with the guidance from the designer. We also propose a navigation guiding language (NGL) to assist the binding and the guiding process. A user study is made to evaluate the performance of our method. Experiments and questionnaires have shown that our method can offer a more attentive and pleasing experience to the player with implicit guidance.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"280 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131680365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PROME
Nianchen Deng, Xubo Yang, Yanqing Zhou
{"title":"PROME","authors":"Nianchen Deng, Xubo Yang, Yanqing Zhou","doi":"10.1145/3284398.3284406","DOIUrl":"https://doi.org/10.1145/3284398.3284406","url":null,"abstract":"Human parametric models can provide useful constraints for human shape estimation to produce more accurate results. However, the state-of-art models are computational expensive which limit their wide use in interactive graphics applications. We present PROME (PROjected MEasures) - a novel human parametric model which has high expressive power and low computational complexity. Projected measures are sets of 2D contour poly-lines that capture key measure features defined in anthropometry. The PROME model builds the relationship between 3D shape and pose parameters and 2D projected measures. We train the PROME model in two parts: the shape model formulates deformations of projected measures caused by shape variation, and the pose model formulates deformations of projected measures caused by pose variation. Based on the PROME model we further propose a fast shape estimation method which estimates the 3D shape parameters of a subject from a single image in nearly real-time. The method builds an optimize problem and solves it using gradient optimizing strategy. Experiment results show that the PROME model has well capability in representing human body in different shape and pose comparing to existing 3D human parametric models, such as SCAPE[Anguelov et al. 2005] and TenBo[Chen et al. 2013], yet keeps much lower computational complexity. Our shape estimation method can process an image in about one second, orders of magnitude faster than state-of-art methods, and the estimating result is very close to the ground truth. The proposed method can be widely used in interactive applications such as virtual try-on and virtual reality collaboration.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"46 50","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113933970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
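PROME estimates 3D shape parameters by fitting projected 2D measures extracted from an image. The sketch below is only a generic illustration of such gradient-based fitting: a toy linear "shape parameters to contour measures" model is fitted to observed measures by gradient descent. The linear model, the measure vector, and the learning rate are assumptions, not the PROME formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: 2D contour measures are a linear function of shape parameters.
# PROME's actual mapping is learned from data; this matrix is random for illustration.
n_params, n_measures = 5, 12
A = rng.normal(size=(n_measures, n_params))
b = rng.normal(size=n_measures)

def predict_measures(beta):
    """Projected contour measures predicted from shape parameters beta."""
    return A @ beta + b

# Pretend these measures were extracted from a single input image.
true_beta = rng.normal(size=n_params)
observed = predict_measures(true_beta) + rng.normal(scale=0.01, size=n_measures)

# Gradient descent on the squared measure-fitting error.
beta = np.zeros(n_params)
lr = 0.01
for _ in range(2000):
    residual = predict_measures(beta) - observed
    grad = 2.0 * A.T @ residual
    beta -= lr * grad

print("parameter error:", np.linalg.norm(beta - true_beta))  # small after convergence
```

The paper's point is that the projected-measure representation keeps this fitting cheap enough to run in about a second per image; the sketch only shows the shape of the optimization loop.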
Fusion of wifi and vision based on smart devices for indoor localization
Jing Guo, Shaobo Zhang, Wanqing Zhao, Jinye Peng
{"title":"Fusion of wifi and vision based on smart devices for indoor localization","authors":"Jing Guo, Shaobo Zhang, Wanqing Zhao, Jinye Peng","doi":"10.1145/3284398.3284401","DOIUrl":"https://doi.org/10.1145/3284398.3284401","url":null,"abstract":"Indoor localization is an important problem with a wide range of applications such as indoor navigation, robot mapping, especially augmented reality(AR). One of most important tasks in AR technology is to estimate the target objects' position information in real environment. The existed AR systems mostly utilize specialized marker to locate, some AR systems track real 3D object in real environment but need to get the the position information of index points in environment in advance. The above methods are not efficiency and limit the application of AR system, so that solving indoor localization problem has significant meaning for the development of AR technology. The development of computer vision (CV) techniques and the ubiquity of intelligent devices with cameras provides the foundation for offering accurate localization services. However, pure CV-based solutions usually involve hundreds of photos and pre-calibration to construct an densely sampled 3D model, which is a labor-intensive overhead for practical deployment. And a large amount of computation cost is difficult to satisfy the requirement for efficiency in mobile device. In this paper, we present iStart, a lightweight, easy deployed, image-based indoor localization system, which can be run on smart phone and VR/AR devices like HTC Vive, Google Glasses and so on. With core techniques rooted in data hierarchy scheme of WiFi fingerprints and photos, iStart also acquires user localization with a single photo of surroundings with high accuracy and short delay. Extensive experiments in various environments show that 90 percentile location deviations are less than 1 m, and 60 percentile location deviations are less than 0.5 m.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125412110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
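The abstract describes hierarchically combining WiFi fingerprints with image matching. As a hedged sketch of that general idea (not iStart's actual pipeline), the snippet below uses WiFi RSSI similarity to shortlist reference locations and then picks the best image match among the shortlist; the fingerprint database, the distance metrics, and the descriptor-matching stand-in are all invented for illustration.

```python
import math

# Hypothetical reference database: location -> (WiFi fingerprint, image descriptor).
# Fingerprints map access-point IDs to RSSI (dBm); descriptors are toy vectors.
DATABASE = {
    "room_101": ({"ap1": -40, "ap2": -70, "ap3": -85}, [0.9, 0.1, 0.2]),
    "room_102": ({"ap1": -65, "ap2": -45, "ap3": -80}, [0.2, 0.8, 0.3]),
    "corridor": ({"ap1": -55, "ap2": -60, "ap3": -50}, [0.4, 0.4, 0.9]),
}

def rssi_distance(fp_a, fp_b, missing=-100.0):
    """Euclidean distance between two RSSI fingerprints over the union of APs."""
    aps = set(fp_a) | set(fp_b)
    return math.sqrt(sum((fp_a.get(ap, missing) - fp_b.get(ap, missing)) ** 2 for ap in aps))

def descriptor_distance(d_a, d_b):
    """Stand-in for real image matching (e.g. local-feature matching)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(d_a, d_b)))

def localize(query_fp, query_desc, k=2):
    # Stage 1: the WiFi fingerprint narrows the search to k candidate locations.
    candidates = sorted(DATABASE, key=lambda loc: rssi_distance(query_fp, DATABASE[loc][0]))[:k]
    # Stage 2: image matching picks the best remaining candidate.
    return min(candidates, key=lambda loc: descriptor_distance(query_desc, DATABASE[loc][1]))

print(localize({"ap1": -42, "ap2": -68, "ap3": -83}, [0.85, 0.15, 0.25]))  # -> room_101
```

The coarse-then-fine structure is what keeps the vision stage cheap on a mobile device: only a handful of reference photos need to be matched per query.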
A VR-based, hybrid modeling approach to fire evacuation simulation
Tao Gu, Changbo Wang, Gaoqi He
{"title":"A VR-based, hybrid modeling approach to fire evacuation simulation","authors":"Tao Gu, Changbo Wang, Gaoqi He","doi":"10.1145/3284398.3284409","DOIUrl":"https://doi.org/10.1145/3284398.3284409","url":null,"abstract":"VR-based simulation could significantly improve the user experience by offering users vivid and near-life visual scenes, hence helping users better handle dangerous situations safely such as fire accidents. In this paper, we design a fire evacuation simulation system and propose a hybrid crowd evacuation modeling and simulation approach, which is a layer-based model adopting both local and global techniques partially into different layers. In essence, this model integrates an agent-based model with an improved dynamical network flow model, which is capable of taking into account issues both from individual diversity and from crowd movement tendency to simulate crowd evacuation. An emergency response mechanism driven by videos is then designed according to the model. Once fire accidents are detected in videos, the system will first simulate accidents according to the fire level provided by the monitoring module and then start an evacuation routine or adjust evacuation routes. The simulation system can be experienced by users in a virtual environment. Finally, evaluations have been conducted to test the rationality of our model and results show that the proposed model can simulate the crowd movement and agent behavior in dynamic environments efficiently.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116848510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
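The hybrid model couples agents with a dynamic network-flow view of the building. As an illustration only (not the paper's model), the sketch below routes agents over a corridor graph with Dijkstra and re-weights edges when a fire is reported, which is one simple way a global layer can steer local agent decisions; the graph, the edge weights, and the fire penalty are invented.

```python
import heapq

# Hypothetical corridor graph: node -> {neighbor: travel cost}.
GRAPH = {
    "roomA": {"hall": 1.0},
    "roomB": {"hall": 1.0},
    "hall": {"roomA": 1.0, "roomB": 1.0, "stairs": 2.0, "lobby": 3.0},
    "stairs": {"hall": 2.0, "exit": 1.0},
    "lobby": {"hall": 3.0, "exit": 1.0},
    "exit": {},
}

def shortest_path(graph, start, goal):
    """Dijkstra; returns the node sequence from start to goal."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

def report_fire(graph, node, penalty=100.0):
    """Global layer: make edges leading into a burning node prohibitively expensive."""
    for src in graph:
        if node in graph[src]:
            graph[src][node] += penalty

agents = {"agent1": "roomA", "agent2": "roomB"}
print({a: shortest_path(GRAPH, pos, "exit") for a, pos in agents.items()})
report_fire(GRAPH, "stairs")          # monitoring module reports fire near the stairs
print({a: shortest_path(GRAPH, pos, "exit") for a, pos in agents.items()})  # routes shift to the lobby
```

In the paper's layered design, the agent layer would add per-individual behavior (speed, panic, local collision avoidance) on top of the globally adjusted routes.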
Facial tracking and animation for digital social system
Dongjin Huang, Yuanqiu Yao, Wen Tang, Youdong Ding
{"title":"Facial tracking and animation for digital social system","authors":"Dongjin Huang, Yuanqiu Yao, Wen Tang, Youdong Ding","doi":"10.1145/3284398.3284413","DOIUrl":"https://doi.org/10.1145/3284398.3284413","url":null,"abstract":"Avatar expression appearing in the virtual social space is one of the key technologies to convey people's emotions and facilitate the social interactions effectively via the virtual social system. Aiming at lack of feasible solutions for synchronized facial expressions in current commercial virtual social systems, this paper presented a virtual social system with the focus on real-time avatar facial expressions. Firstly, cascaded pose regression was adopted to train a dynamic expression model to infer the expression coefficients from 2D video frames, and the facial landmarks in regression were extracted by supervised descent method instead of 2D cascaded pose regression to achieve better robustness and fault tolerance in facial tracking and animation. Secondly, we proposed a multi-scale adaptive expression coding technology for expression-voice data synchronization and striking balance between real-time and richness of facial expressions in varied complex network situations. The experimental results show that the proposed facial tracking and animation system is practical and feasible, and could produce a high degree of realistic emotional cues in virtual social system.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131069913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
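The pipeline maps tracked 2D landmarks to expression coefficients every frame. The snippet below is a generic least-squares illustration of that mapping step only, not the paper's regression model: landmark displacements are explained as a non-negative combination of per-expression basis displacements. The basis, the landmark count, and the projected-gradient solver are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setup: 20 2D landmarks flattened into a 40-vector.
n_landmarks, n_expressions = 20, 4
neutral = rng.normal(size=2 * n_landmarks)
# Each column is the landmark displacement produced by one basis expression
# (e.g. smile, jaw-open, brow-raise, blink) at full strength.
basis = rng.normal(scale=0.1, size=(2 * n_landmarks, n_expressions))

def solve_coefficients(tracked, iters=500, lr=0.5):
    """Projected gradient descent for min ||basis @ w - (tracked - neutral)||^2 with w in [0, 1]."""
    target = tracked - neutral
    w = np.zeros(n_expressions)
    for _ in range(iters):
        grad = 2.0 * basis.T @ (basis @ w - target)
        w = np.clip(w - lr * grad, 0.0, 1.0)
    return w

# Simulate a tracked frame that is 70% "expression 0" plus 30% "expression 2", with noise.
true_w = np.array([0.7, 0.0, 0.3, 0.0])
frame = neutral + basis @ true_w + rng.normal(scale=0.005, size=2 * n_landmarks)

print(np.round(solve_coefficients(frame), 2))  # approximately [0.7, 0.0, 0.3, 0.0]
```

The recovered coefficients would then be streamed (together with voice data) to drive the avatar's blendshapes on the remote side.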
Image recoloring for home scene
Xianxuan Lin, Xun Wang, Frederick W. B. Li, Bailin Yang, Kaili Zhang, T. Wei
{"title":"Image recoloring for home scene","authors":"Xianxuan Lin, Xun Wang, Frederick W. B. Li, Bailin Yang, Kaili Zhang, T. Wei","doi":"10.1145/3284398.3284404","DOIUrl":"https://doi.org/10.1145/3284398.3284404","url":null,"abstract":"Indoor home scene coloring technology is a hot topic for home design, helping users make home coloring decisions. Image based home scene coloring is preferable for e-commerce customers since it only requires users to describe coloring expectations or manipulate colors through images, which is intuitive and inexpensive. In contrast, if home scene coloring is performed based on 3D scenes, the process becomes expensive due to the high cost and time in obtaining 3D models and constructing 3D scenes. To realize image based home scene coloring, our framework can extract the coloring of individual furniture together with their relationship. This allows us to formulate the color structure of the home scene, serving as the basis for color migration. Our work is challenging since it is not intuitive to identify the coloring of furniture and their parts as well as the coloring relationship among furniture. This paper presents a new color migration framework for home scenes. We first extract local coloring from a home scene image forming a regional color table. We then generate a matching color table from a template image based on its color structure. Finally we transform the target image coloring based on the matching color table and well maintain the boundary transitions among image regions. We also introduce an interactive operation to guide such transformation. Experiments show our framework can produce good results meeting human visual expectations.","PeriodicalId":340366,"journal":{"name":"Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116516870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
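The framework matches a regional color table extracted from the target image against colors from a template and then transfers them. As a hedged, much-simplified illustration (not the paper's method), the sketch below matches each region's dominant color to the closest template color in RGB and shifts the region toward it; dominant-color extraction, perceptual color spaces, and the boundary-transition handling of the actual pipeline are omitted.

```python
import numpy as np

def match_color_table(region_colors, template_colors):
    """For each regional dominant color, pick the nearest template color (RGB distance)."""
    matches = []
    for c in region_colors:
        dists = np.linalg.norm(template_colors - c, axis=1)
        matches.append(template_colors[int(np.argmin(dists))])
    return np.array(matches)

def recolor_regions(image, labels, region_colors, matched_colors, strength=0.8):
    """Shift every pixel of each labeled region toward its matched template color."""
    out = image.astype(np.float32).copy()
    for idx, (src, dst) in enumerate(zip(region_colors, matched_colors)):
        mask = labels == idx
        out[mask] += strength * (dst - src)   # constant per-region color shift
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy 2x2 "scene" with two regions (e.g. sofa = region 0, wall = region 1).
image = np.array([[[200, 60, 60], [200, 60, 60]],
                  [[230, 230, 220], [230, 230, 220]]], dtype=np.uint8)
labels = np.array([[0, 0], [1, 1]])
region_colors = np.array([[200, 60, 60], [230, 230, 220]], dtype=np.float32)
template_colors = np.array([[70, 110, 180], [245, 240, 230]], dtype=np.float32)  # from a template image

matched = match_color_table(region_colors, template_colors)
print(recolor_regions(image, labels, region_colors, matched))
```

A per-region constant shift like this produces hard seams at region borders, which is exactly the artifact the paper's boundary-transition handling is designed to avoid.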