{"title":"Research on camouflage effect evaluation method of moving object based on video","authors":"Juntang Yang, Weidong Xu, Qingkai Qiu, Yang Qu","doi":"10.1145/3013971.3013976","DOIUrl":"https://doi.org/10.1145/3013971.3013976","url":null,"abstract":"At present, the detection and evaluation method of camouflage effect is mainly aimed at the static target, it cannot objectively reflect the effectiveness of the camouflage effect of the moving target in the operational action. In this paper, the moving object detection technology is combined with the principle of camouflage was comprehensively used. Using Hausdorff distance and color difference minimum principal color similarity vector algorithm to calculate the degree of distortion, target and center background similarity. In order to check the camouflage effect under the condition of moving target, dynamic distortion frame ratio, dynamic similarity frame ratio are proposed based on video stream. In this paper, the experiment is carried out with the step vehicle as the moving object, calculating the deformation degree to achieve the goal of dynamic deformation of 0.4 frame rate ratios were 70.5% and 0% under the two kinds of camouflage methods, The similarity reaches the target frame rate of dynamic similarity ratio of 0.36 were 89.8% and 0%, This shows that the dynamic camouflage effect of the target after digital camouflage painting is better than the target which used green painting. When the target deformation degree and the target and background similarity reaches a value of dynamic deformation of frame frequency ratio and dynamic similarity frame rate ratio are bigger, indicating that the better the dynamic camouflage effect is.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133636938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An immersive approach to the visual exploration of geospatial network datasets","authors":"Meng-Jia Zhang, Jie Li, Kang Zhang","doi":"10.1145/3013971.3013983","DOIUrl":"https://doi.org/10.1145/3013971.3013983","url":null,"abstract":"Classic geospatial network visualization tends to limit itself to 2D representation by organizing edges and nodes on a 2D map or the external surface of a traditional globe model. Due to the visual clutter caused by a large amount of edge crossings and node-edge overlaps, efficient exploration of geospatial network visualization becomes challenging when the positions of the nodes in the network data are unchangeable as they carry useful geographical information. This paper proposes the Sphere Immersive Model and a dedicated VR interaction method for the intuitive exploration of geospatial networks. To reduce visual clutter and reveal network patterns, we also propose a parametrizable 5-step 3D edge bundling algorithm and an approach to avoiding collision of network edges with the viewpoint. Our SIM and 3D edge bundling approaches have been implemented for an Oculus Rift environment. We demonstrate the usefulness of our approach with a case study on a real-world network dataset.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124027665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User guided 3D scene enrichment","authors":"Suiyun Zhang, Zhizhong Han, Hui Zhang","doi":"10.1145/3013971.3014002","DOIUrl":"https://doi.org/10.1145/3013971.3014002","url":null,"abstract":"Enriching 3D scenes with small objects is an important step for creating realistic scenes. It becomes tougher to involve user guidance to increase the variety of the scene enrichment results. To resolve this problem, we present a user-guided 3D indoor scene enrichment framework that helps users to effectively apply their rules for small-object arrangements. The enrichment problem can be divided into three parts: what categories of small objects should appear, where the small objects should be placed and how to arrange them on furniture objects. The first two questions are answered by statistical information learned from image datasets and the third question is answered by constructing a cost function considering both constraints proposed by our system and arrangement rules specified by users. Our experiments show that this framework can efficiently generate plausible scene enrichments that conform to the user-specified arrangement rules.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"167 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121251736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Illumination invariant mesh coloring","authors":"Weijian Cao, Shouhong Ding, Lizhuang Ma","doi":"10.1145/3013971.3014007","DOIUrl":"https://doi.org/10.1145/3013971.3014007","url":null,"abstract":"Many approaches have been proposed for reconstructing photorealistic texture mapping of 3D models with multi-view images. These models can properly represent the objects in fixed light conditions. However, they are unable to react to light changes. In light-changeable application scenarios, such as visual reality, relighting and 3D printing, models without diffuse reflection are highly demanded. In this paper, we present a non-photorealistic per-vertex illumination invariant mesh coloring method to couple intrinsic color information into mesh, which can reflect material properties more accurately. The proposed method consists of two steps: global optimal coloring and label-based intrinsic decomposition. Illumination effect is eliminated in decomposition procedure and color consistency is also implicitly considered. Experiments on 3D printing and model relighting demonstrated that our approach is versatile and practical.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115562253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interaction in marker-less augmented reality based on hand detection using leap motion","authors":"Juncheng Zhao, S. H. Soon","doi":"10.1145/3013971.3014022","DOIUrl":"https://doi.org/10.1145/3013971.3014022","url":null,"abstract":"In this paper, a novel interaction framework for marker-less Augmented Reality is introduced. Model-based detection is one of the solutions for marker-less Augmented Reality. Instead of a marker, human hand is used as a distinctive object on which the augmented object placed. Leap Motion, a Virtual Reality (VR) hand tracking device, is used to detect the hand in our interaction framework for marker-less Augmented Reality. 3D hand position, gesture and direction can be obtained by using Leap Motion and passed to Unity 3d game engine. The main task for this framework is to calibrate the actual hand and the virtual hand generated by computer so that they can overlay each other. With the help of marker-less Augmented Reality Framework a user can experience intuitive interaction with virtual object and natural occlusion which will be the core functionality for next generation game, education, user interface and industrial.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127243674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Point cloud registration by discrete spin image and normal alignment radial feature","authors":"Xudong Li, J. Liu, Huijie Zhao","doi":"10.1145/3013971.3013994","DOIUrl":"https://doi.org/10.1145/3013971.3013994","url":null,"abstract":"Point cloud registration is a 3D data processing procedure that stitches two or more point clouds together in environment modeling and other related fields. When modeling the plant in the nature environment accurately, the field of view of the scanner is usually so limited that it is hard to acquire the whole point cloud of the plant, so point cloud registration is necessary. The spin image describes the characteristics of point cloud and has great potential in the feature based point cloud registration. In this paper, we propose a registration algorithm based on Discrete Spin Image (DSI) combined with Normal Alignment Radial Feature (NARF), which improves the process of normal calculation and computational efficiency. It is robust under the influence of noise and density of point cloud. Experiments show that the registration speed is increased by at least 6 times and the registration accuracy is about two thirds of average distance of points in point cloud.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130154891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VR locomotion: walking > walking in place > arm swinging","authors":"P. Wilson, William Kalescky, Ansel MacLaughlin, B. Sanders","doi":"10.1145/3013971.3014010","DOIUrl":"https://doi.org/10.1145/3013971.3014010","url":null,"abstract":"There are many methods of exploring an HMD-based virtual environment such as using a game controller, physically walking, walking in place, teleporting, flying, leaning, etc. The purpose of this work is to introduce a simple method of implementing \"walking in place\" using a simple inexpensive accelerometer sensor. We then evaluate this method of walking in place by comparing it to normal walking and another previously published inexpensive method of exploration, arm swinging. In an experiment that compares the spatial awareness, we show that walking in place is not as good as walking on foot, but it is is better than arm swinging. Subjects also complete blind locomotion distance estimation trials in each of the locomotion conditions.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128899252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geodesic histogram based 3D deformable shape correspondence","authors":"Xiang Pan, Zhihao Cheng, J. Lin, Zhi Liu","doi":"10.1145/3013971.3013988","DOIUrl":"https://doi.org/10.1145/3013971.3013988","url":null,"abstract":"Geodesic distance has been widely used in building correspondence between two 3D shapes. This paper extends previous work and proposes a correspondence method by geodesic histogram. Geodesic histogram owns two distinct characteristics for correspondence. Firstly, it captures not only the length of two feature points, but also local geometrical features along with the geodesic path. Secondly, it can support partial correspondence since it is locally stable for incomplete 3D shape. Experimental results demonstrate that our algorithm is better able to produce a correct correspondence in the case of pose variation, deformation and even missing parts. Furthermore, statistical analysis shows the proposed method obtains an obvious improvement over the similar method in the literature.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122405106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulating group formation and behaviour in dense crowd","authors":"F. M. Nasir, T. Noma, Masaki Oshita, Kunio Yamamoto, M. S. Sunar, Shamsul Mohamad, Yasutaka Honda","doi":"10.1145/3013971.3014017","DOIUrl":"https://doi.org/10.1145/3013971.3014017","url":null,"abstract":"This paper presents a technique to simulate large groups in a dense crowd, where the groups can change their formation, and continuously avoid collision with other individual agents and groups, but still try to keep their collective behaviour until they reach their destination. To achieve this, we use the leader-follower model where the leader determines the group path while other members, driven by the modified social force model (SFM), follow the leader, maintaining the group formation. We also use density of agents in the travelling direction as the criteria to determine the appropriate formation type. Our proposed technique is easily compatible with individual agents driven by the existing SFM at moderate costs.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123578988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal clustering of motion capture data with optimal partitioning","authors":"Yang Yang, Hubert P. H. Shum, N. Aslam, Lanling Zeng","doi":"10.1145/3013971.3014019","DOIUrl":"https://doi.org/10.1145/3013971.3014019","url":null,"abstract":"Motion capture data can be characterized as a series of multidimensional spatio-temporal data, which is recorded by tracking the number of key points in space over time with a 3-dimensional representation. Such complex characteristics make the processing of motion capture data a non-trivial task. Hence, techniques that can provide an approximated, less complicated representation of such data are highly desirable. In this paper, we propose a novel technique that uses temporal clustering to generate an approximate representation of motion capture data. First, we segment the motion in the time domain with an optimal partition algorithm so that the within-segment sum of squared error (WSSSE) is minimized. Then, we represent the motion capture data as the averages taken over all the segments, resulting in a representation of much lower complexity. Experimental results suggest that comparing with the compared methods, our proposed representation technique can better approximate the motion capture data.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130164819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}