Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry: Latest Publications

ToonNet: a cartoon image dataset and a DNN-based semantic classification system
Yanqing Zhou, Yongxu Jin, Anqi Luo, Szeyu Chan, Xiangyun Xiao, Xubo Yang
DOI: 10.1145/3284398.3284403 (published 2018-12-02)
Abstract: Cartoon-style pictures can be seen almost everywhere in daily life, and since numerous applications deal with cartoon pictures, a dataset of such images is valuable for them. In this paper, we first present ToonNet, a cartoon-style image recognition dataset. We construct the benchmark set from 4,000 images in 12 different classes collected from the Internet with little manual filtration, and extend this base dataset to 10,000 images using several methods, including snapshots of 3D models rendered with a cartoon shader, a 2D-3D-2D conversion procedure using a cartoon-modeling method, and a hand-drawing stylization filter. We then describe how to build an effective neural network for image semantic classification based on ToonNet. We present three techniques for building the Deep Neural Network (DNN): IUS (Inputs Unified Stylization), which stylizes the inputs to reduce the complexity of hand-drawn cartoon images; FIN (Feature Inserted Network), which inserts intuitive and valuable global features into the network; and NPN (Network Plus Network), which combines multiple single networks into a new mixed network. Our experiments show the efficacy and generality of these network strategies: with these techniques, classification accuracy reaches 78% (top-1) and 93% (top-3), an improvement of about 5% (top-1) over classical DNNs.
Citations: 3
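The paper does not include code; the snippet below is only a minimal PyTorch sketch of the NPN ("Network Plus Network") idea under our own assumptions: the branch layouts, feature sizes, and 12-class output are illustrative, not the architecture used by the authors.

```python
# Minimal sketch of the NPN ("Network Plus Network") idea: two independent
# convolutional branches are fused into one mixed classifier.  Branch layouts,
# feature sizes and the 12-class output are illustrative assumptions,
# not the architecture used in the paper.
import torch
import torch.nn as nn

class SmallBranch(nn.Module):
    """One 'single network' producing a fixed-length feature vector."""
    def __init__(self, out_features=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.fc = nn.Linear(64, out_features)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class NPNClassifier(nn.Module):
    """Concatenate the branch features and classify into 12 cartoon classes."""
    def __init__(self, num_classes=12):
        super().__init__()
        self.branch_a = SmallBranch()
        self.branch_b = SmallBranch()
        self.head = nn.Linear(128 * 2, num_classes)

    def forward(self, x):
        return self.head(torch.cat([self.branch_a(x), self.branch_b(x)], dim=1))

model = NPNClassifier()
logits = model(torch.randn(4, 3, 224, 224))   # batch of 4 RGB images
print(logits.shape)                            # torch.Size([4, 12])
```

The FIN idea could be sketched in the same way, by concatenating hand-crafted global features alongside the branch outputs before the final layer.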
Deformation simulation of non-orthotropic materials
Wei Cao, Xiaohua Ren, Luan Lyu, E. Wu
DOI: 10.1145/3284398.3284400 (published 2018-12-02)
Abstract: Physically based deformation simulation has been studied for many years in computer graphics. To simulate more complex materials and better meet designers' requirements, anisotropic approaches have been proposed in recent years, but most existing approaches focus on orthotropic materials. In this paper, a general non-orthogonal constitutive model is presented to simulate the anisotropic deformation behavior of 3D soft objects. The model exhibits different deformation behaviors in different directions by constructing a non-orthogonal coordinate system with covariant and contravariant basis vectors. The constitutive relation between stress and strain is first defined in the non-orthogonal coordinate system and then transformed into the standard Cartesian coordinate system to represent globally non-orthotropic materials. In addition, a time-varying method is introduced to track changes of the local coordinate system of each discrete element during deformation, which makes the simulation of non-orthotropic materials more stable. Finally, to obtain the desired behavior for objects with complex structure, the deformable objects are partitioned into several regions according to their skeletons in combination with the frame-field concept, and a corotational linear Finite Element Method (CLFEM) is used to carry out the simulation. Experiments demonstrate the efficiency of the non-orthogonal constitutive model.
Citations: 1
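As a minimal illustration of one step in this kind of pipeline (our own simplification, not the authors' code), the NumPy snippet below transforms contravariant stress components expressed in a non-orthogonal local basis into standard Cartesian components; the basis vectors g1, g2, g3 are made up for the example.

```python
# Minimal sketch (our own simplification, not the authors' code): transform
# contravariant stress components given in a non-orthogonal local basis into
# Cartesian components.  The local basis vectors g1, g2, g3 are illustrative.
import numpy as np

# Covariant basis vectors of the local (non-orthogonal) frame, as columns of G.
g1 = np.array([1.0, 0.0, 0.0])
g2 = np.array([0.5, 1.0, 0.0])     # deliberately not orthogonal to g1
g3 = np.array([0.2, 0.3, 1.0])
G = np.column_stack([g1, g2, g3])

# Contravariant stress components sigma^{ij} in the local frame (symmetric).
sigma_local = np.array([[2.0, 0.1, 0.0],
                        [0.1, 1.5, 0.2],
                        [0.0, 0.2, 1.0]])

# sigma = sigma^{ij} g_i (x) g_j  =>  Cartesian components are G * sigma * G^T.
sigma_cartesian = G @ sigma_local @ G.T
print(sigma_cartesian)
```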
Arriving light control for color vision deficiency compensation using optical see-through head-mounted display
Ying Tang, Zhenyang Zhu, M. Toyoura, K. Go, K. Kashiwagi, I. Fujishiro, Xiaoyang Mao
DOI: 10.1145/3284398.3284407 (published 2018-12-02)
Abstract: Color vision deficiency (CVD), also known as color blindness, is commonly caused by a genetic disorder and, as of 2018, has no cure. Contact lenses and glasses with color filters are possible solutions to CVD that apply uniform changes to the user's field of view (FoV). An optical see-through head-mounted display (OST-HMD), on the other hand, can provide a controllable overlay on the user's FoV, which could lead to a better solution. To calibrate colors in the FoV of a user with CVD, commonly used methods such as daltonization need a light-reduction feature, which makes calibrated colors darker; however, recent commercially available OST-HMDs do not provide a controllable way to decrease the brightness of incoming light. In this paper, we present an approach to light subtraction for an OST-HMD using a transmissive LCD panel. A prototype system that achieves a controllable overlay on the user's FoV with an OST-HMD was implemented using a scene camera, a user-perspective camera, and the transmissive LCD panel.
Citations: 5
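To make the light-subtraction idea concrete, here is a small illustrative NumPy sketch (our own simplification, not the authors' calibration pipeline): given the scene color and a desired calibrated color per pixel, it splits the difference into an LCD attenuation factor in [0, 1] and a non-negative additive HMD overlay so that attenuation * scene + overlay = target.

```python
# Illustrative sketch (our own simplification, not the authors' calibration
# pipeline): split a desired per-pixel color into a subtractive LCD attenuation
# and an additive HMD overlay so that  attenuation * scene + overlay = target.
import numpy as np

def split_light_control(scene, target, eps=1e-6):
    """scene, target: float arrays in [0, 1], shape (H, W, 3)."""
    scene = np.clip(scene, 0.0, 1.0)
    target = np.clip(target, 0.0, 1.0)
    # Where the target is darker than the scene, attenuate; otherwise add light.
    attenuation = np.where(target <= scene,
                           target / np.maximum(scene, eps),
                           1.0)
    overlay = np.maximum(target - attenuation * scene, 0.0)
    return attenuation, overlay

scene = np.random.rand(4, 4, 3)
target = np.random.rand(4, 4, 3)          # e.g. output of a daltonization step
a, o = split_light_control(scene, target)
print(np.allclose(a * scene + o, target, atol=1e-5))   # True
```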
Polygon reduction for collision using navigation Voxel and QEM
Eitaro Iwabuchi, Koji Mikami
DOI: 10.1145/3284398.3284428 (published 2018-12-02)
Abstract: This paper presents an efficient approach to polygon reduction for collision meshes. In general, collision models are used to detect intersections with the player model. In game production, an artist can use software such as SIMPLYGON (by Microsoft Corporation) with a weight-painting feature to create a collision model, but weight painting is time-consuming because there are many environment maps to paint, and many teams do not have access to software like SIMPLYGON and must create collision models by hand. This paper proposes an approach that detects the area a player can walk around in the form of navigation voxels, determines which areas need more polygons and which need fewer, and uses this information to reduce polygons where appropriate. The method keeps the artist's workflow in mind and reduces the amount of work the artist is required to do.
Citations: 0
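The paper gives no algorithm listing; the sketch below is our own guess at how walkable-area voxels could drive a QEM simplifier: vertices near walkable voxels receive a high preservation weight, which would scale each vertex's quadric before edge collapses. The weighting scheme and thresholds are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (our own assumption of how navigation voxels could drive
# QEM): vertices close to walkable voxels get a high preservation weight so the
# simplifier keeps detail there, while far-away vertices can be collapsed more
# aggressively.  The actual paper's weighting scheme may differ.
import numpy as np

def qem_vertex_weights(vertices, walkable_voxel_centers, voxel_size,
                       near_weight=10.0, far_weight=1.0):
    """vertices: (N, 3); walkable_voxel_centers: (M, 3) from a nav voxelization."""
    # Distance from every vertex to its nearest walkable voxel centre
    # (brute force for clarity; a KD-tree would be used in practice).
    d = np.linalg.norm(vertices[:, None, :] - walkable_voxel_centers[None, :, :],
                       axis=2).min(axis=1)
    near = d <= 2.0 * voxel_size          # "the player can reach this area"
    return np.where(near, near_weight, far_weight)

verts = np.random.rand(100, 3) * 10.0
walkable = np.array([[1.0, 0.0, 1.0], [2.0, 0.0, 1.0]])   # toy voxel centres
w = qem_vertex_weights(verts, walkable, voxel_size=0.5)
# w would then scale each vertex's quadric before edge-collapse simplification.
print(w[:10])
```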
Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry
DOI: 10.1145/3284398 (published 2018-12-02)
Citations: 0
An asymmetric collaborative system for architectural-scale space design
Yuta Sugiura, Hikaru Ibayashi, T. Chong, Daisuke Sakamoto, N. Miyata, M. Tada, T. Okuma, T. Kurata, T. Shinmura, M. Mochimaru, T. Igarashi
DOI: 10.1145/3284398.3284416 (published 2018-12-02)
Abstract: We present a system that facilitates asymmetric collaboration among users with two different viewpoints in the design of living or working spaces. One viewpoint is that of the space designers, who observe and alter the space from a top-down view using a large table-top interface. The other is that of a space occupant, who observes the space from internal views through a head-mounted display. We conducted two studies to understand how the system supports users in architectural-scale space design: a preliminary user study observing general behavior with the Dollhouse VR system, and a case study in which actual restaurant employees discussed rearranging the floor by moving tables and chairs in the virtual environment. The results show that the system supports a pair of interaction techniques that can facilitate communication between these two user viewpoints.
Citations: 17
Projection mapping based on BRDF reconstruction from single RGBD image
Linling Xun, Shuangjiu Xiao, Chenyu Bian, Jiheng Jiang
DOI: 10.1145/3284398.3284410 (published 2018-12-02)
Abstract: Much research on projection mapping has focused on tracking target objects, recovering geometric shape, or simulating virtual materials such as cloth, but little attention has been paid to the material of the target object, which actually influences the visual result of the projection. We present a new projection mapping framework based on BRDF reconstruction that aims for more realistic projection results by enhancing the effect of augmented reality. In the framework, a 3D computer vision method reconstructs the BRDF of the target object from a single RGBD image. A new algorithm is proposed that uses two Convolutional Neural Networks (CNNs) to predict both the normal map and the reflectance map of the target surface simultaneously from the RGBD image. The predicted maps are used to render the content to be projected onto the target object. Our BRDF reconstruction algorithm can correctly recover several materials in one scene from just one image. Experimental results show that the framework achieves impressive performance and relatively accurate results.
Citations: 0
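For illustration, here is a minimal PyTorch sketch of the two-network prediction step (layer counts and sizes are our own assumptions, not the authors' exact architectures): two small fully-convolutional CNNs take the 4-channel RGBD image and output the normal map and the reflectance map respectively.

```python
# Minimal sketch (illustrative layer counts and sizes, not the authors' exact
# networks): two small fully-convolutional CNNs take the 4-channel RGBD image
# and predict the normal map and the reflectance map respectively.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MapPredictor(nn.Module):
    """Fully-convolutional network mapping RGBD to a 3-channel per-pixel map."""
    def __init__(self, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, rgbd):
        return self.net(rgbd)

normal_net = MapPredictor()        # predicts per-pixel surface normals
reflectance_net = MapPredictor()   # predicts per-pixel reflectance (albedo)

rgbd = torch.randn(1, 4, 128, 128)                     # RGB + depth
normals = F.normalize(normal_net(rgbd), dim=1)         # unit-length normals
reflectance = torch.sigmoid(reflectance_net(rgbd))     # values in [0, 1]
print(normals.shape, reflectance.shape)                # [1, 3, 128, 128] each
```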
An empirical evaluation of labelling method in augmented reality
Gang Li, Yue Liu, Yongtian Wang
DOI: 10.1145/3284398.3284422 (published 2018-12-02)
Abstract: In an augmented reality system, labelling is a very useful assistive technique for browsing and understanding unfamiliar objects or environments: virtual labels of words or pictures superimposed on the real scene provide convenient information to viewers, extend recognition of areas of interest, and promote interaction with the real scene. How to lay out labels in the user's field of view while keeping the virtual information clear and balancing the ratio between virtual information and real-scene information is a key problem in view management. This paper presents empirical results from an experiment on users' visual perception of labelling layouts, reflecting their subjective preferences regarding the different factors that influence the labelling result. Statistical analysis of the experimental results characterizes the intuitive visual judgements made by the subjects, and a quantitative measurement of clutter indicates the change that labels induce on the real scene, which can inform future label design in view management.
Citations: 3
Interactive scenario visualisation in immersive virtual environments for decision making support
Daniel Filonik, Amy Buchan, Lucy Ogden-Doyle, T. Bednarz
DOI: 10.1145/3284398.3284426 (published 2018-12-02)
Abstract: This paper describes the design and implementation of Sensorland, an immersive environment developed to support an existing policy-making framework. In particular, our aim was to visualise future application scenarios for sensor systems in two different contexts: (1) a highly automated mining environment, and (2) a natural disaster response situation. For this purpose, two virtual reality (VR) environments for the HTC Vive were created using the Unity 3D game engine. Finally, the opportunities and pitfalls of the VR medium for presenting information in support of policy making are discussed. We make several recommendations for the future development of the content creation process and consider its applicability in other contexts.
Citations: 0
Visual analytics of single cell microscopy data using a collaborative immersive environment
J. Lock, Daniel Filonik, R. Lawther, N. Pather, K. Gaus, S. Kenderdine, T. Bednarz
DOI: 10.1145/3284398.3284412 (published 2018-12-02)
Abstract: Understanding complex physiological processes demands the integration of diverse insights derived from visual and quantitative analysis of bio-image data, such as microscopy images. This process is currently constrained by disconnects between methods for interpreting data, as well as by language barriers that hamper the necessary cross-disciplinary collaborations. Using immersive analytics, we leveraged bespoke immersive visualizations to integrate bio-images and derived quantitative data, enabling deeper comprehension and seamless interaction with multi-dimensional cellular information. We designed and developed a visualization platform that combines time-lapse confocal microscopy recordings of cancer cell motility with image-derived quantitative data spanning 52 parameters. The integrated data representations enable rapid, intuitive interpretation, bridging the divide between bio-images and quantitative information. Moreover, the immersive visualization environment promotes collaborative data interrogation, supporting vital cross-disciplinary collaborations capable of deriving transformative insights from rapidly emerging bio-image big data.
Citations: 7