{"title":"VRMController: an input device for navigation activities in virtual reality environments","authors":"Hai-Ning Liang, Yu-hui Shi, Feiyu Lu, Jizhou Yang, Konstantinos Papangelis","doi":"10.1145/3013971.3014005","DOIUrl":"https://doi.org/10.1145/3013971.3014005","url":null,"abstract":"Despite the rapid advancement of display capabilities of VR, in the form of wearable goggles like the Oculus for example, there has been relatively limited progress in the development of input devices for this technology. In this paper, we describe an input controller that is aimed at supporting users' navigation activities in virtual reality environments. Navigation is common in VR environments. The traditional game controller for consoles is still a common choice but only now companies are just beginning to introduce new concepts (for example the HTC Vive Controller). In this research we explore the development of an alternative input controller to support users' navigation activities. This process has led to the design and creation of VRMController, an input device based on a mobile phone designed specifically to support single-hand interaction within VR environments. The design VRMController is based on the results of an initial study comparing three input devices: an Xbox game controller, the HTC Vive controller, and a tablet device. Based on feedback from participants about the useful features of the three types of devices, we have distilled five design guidelines and used them to inform the development of VRMController. The results of a second study comparing our controller with an Xbox controller shows that with our controller participants are able to achieve better performance and find that it is easier to use and more usable.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123928464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Haptic mirror: a platform for active exploration of facial expressions and social interaction by individuals who are blind","authors":"S. Yasmin, S. Panchanathan","doi":"10.1145/3013971.3013999","DOIUrl":"https://doi.org/10.1145/3013971.3013999","url":null,"abstract":"To people who are visually impaired, sensory substitution requires conversion from distal visual data to proximal haptic cue. The challenge is to identify the optimum haptic cues that will make a common haptic \"language\", which can eventually be used to convey information about facial expressions to the individuals who are blind. Once this language has been developed, the next challenge is learning and training. We propose a dynamic haptic tool for simultaneous learning and interaction with a proposed haptic language. We call this application \"Haptic mirror\", an interactive haptic-based reflection of one's self which can be comparable to a visual mirror. Once the user has acquired this language, it can be used to convey the facial expressions of social peers.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114778685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DovCut: a draft based online video compositing system","authors":"Guangyu Nie, Yue Liu, Yongtian Wang","doi":"10.1145/3013971.3013980","DOIUrl":"https://doi.org/10.1145/3013971.3013980","url":null,"abstract":"The customization of the video endows users the ability to design the desired videos by themselves, however it is still a challenging task to achieve video composition up to now. In this paper, we design a draft based online video customization system (DovCut) that can convert the freehand draft into a realistic video. Based on a freehand draft in canvas with tags, the proposed system first searches the candidate videos online automatically to match the scene items and segments each candidate object from the background, then replaces the objects in the freehand draft into the realistic objects. By using a filtering scheme the proposed system excludes the undesirable video clips and selects the suitable videos automatically to generate a high quality composition video. Experimental results show the application potential of the proposed system.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115125891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on simulation system of welding robot in Unity3d","authors":"J. Pan, Yong Zhuo, Liang Hou, Xiangjian Bu","doi":"10.1145/3013971.3013982","DOIUrl":"https://doi.org/10.1145/3013971.3013982","url":null,"abstract":"In industrial application and teaching training, robot simulation plays an important role. In this paper, the method of system development is discussed. First the robot model and simulation scene model are built. And the kinematics model is established including positive and inverse kinematics. Then two kinds of path, line and circle, are planned about positon and orientation as well. And the pose of torch is discussed for welding. With the help of Unity3d's powerful engine, simulation effects are implemented. Finally, a welding robot simulation system is set up in Unity3d based on Virtual Reality technology. The simulation results show that the system is well applied in engineering and training.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124411726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel bagged particle filter for object tracking","authors":"Haozhi Huang, Yanyan Liang, A. Tsoi, Sio-Long Lo, A. Leung","doi":"10.1145/3013971.3013997","DOIUrl":"https://doi.org/10.1145/3013971.3013997","url":null,"abstract":"In this paper, we propose a novel bagged particle filter framework to filtering the noise information from object trackers using generative model as well as the discriminative model. The framework makes use of objectness measurement for modeling observation likelihood and two powerful object detectors: the real-time L1 tracker and the TLD tracker combined to bagged trackers. By maxmazing the posterior of the proposed inference, inaccuracy information is filtered and more accuracy result from varying samples returned by different trackers is provided by the bagged particle filter. The experiment results suggest that the proposed particle filter is effective in combing the complementary nature of either the sparse tracking approach and the discriminative learning approach.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129351931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic planar shape segmentation from indoor point clouds","authors":"Wuyang Shui, Jin Liu, Pu Ren, S. Maddock, Mingquan Zhou","doi":"10.1145/3013971.3014008","DOIUrl":"https://doi.org/10.1145/3013971.3014008","url":null,"abstract":"The use of a terrestrial laser scanner (TLS) has become a popular technique for the acquisition of 3D scenes in architecture and design. Surface reconstruction is used to generate a digital model from the acquired point clouds. However, the model often consists of excessive data, limiting real-time user experiences that make use of the model. In this study, we present a coarse to fine planar shape segmentation method for indoor point clouds, which results in the digital model of an indoor scene being represented by a small number of planar patches. First, the Gaussian map and region growing techniques are used to coarsely segment the planar shape from sampled point clouds. Then, the best-fit-plane is calculated by random sample consensus (RANSAC), avoiding the negative impact of outliers. Finally, the refinement of planar shape is produced by projecting point clouds onto the corresponding bestfit-plane. Our method has been demonstrated to be robust towards noise and outliers in the scanned point clouds and overcomes the limitations of over- and under-segmentation. We have tested our system and algorithms on real datasets and experiments show the reliability of the proposed method against existing region-growing methods.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130096645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facial expressions recognition based on convolutional neural networks for mobile virtual reality","authors":"Teng Teng, Xubo Yang","doi":"10.1145/3013971.3014025","DOIUrl":"https://doi.org/10.1145/3013971.3014025","url":null,"abstract":"We present a new system designed for enabling direct face-to-face interaction for users wearing a head-mounted displays (HMD) in virtual reality environment. Due to HMD's occlusion of a user's face, VR applications and games are mainly designed for single user. Even in some multi-player games, players can only communicate with each other using audio input devices or controllers. To address this problem, we develop a novel system that allows users to interact with each other using facial expressions in real-time. Our system consists of two major components: an automatic tracking and segmenting face processing component and a facial expressions recognizing component based on convolutional neural networks (CNN). First, our system tracks a specific marker on the front surface of the HMD and then uses the extracted spatial data to estimate face positions and rotations for mouth segmentation. At last, with the help of an adaptive approach for histogram based mouth segmentation [Panning et al. 2009], our system passes the processed lips pixels' information to CNN and get the facial expressions results in real-time. The results of our experiments show that our system can effectively recognize the basic expressions of users.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122913759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FPGA design & implementation of a very-low-latency video-see-through (VLLV) head-mount display (HMD) system for mixed reality (MR) applications","authors":"T. Ai","doi":"10.1145/3013971.3014020","DOIUrl":"https://doi.org/10.1145/3013971.3014020","url":null,"abstract":"There has been a trend of increasing scales in many Augmented Reality (AR) and Mixed Reality (MR) applications, in both capturing size of the environment using SLAM, or displaying Field of View (FOV) of the digital imageries. However, the Optical See-Through (OST) methods have limited FOV and involve complex design and fabrication. Video See-Through (VST) Head-Mount Display (HMD), on the other hand, has much larger FOV and is easier/cheaper to manufacture. Moreover, it is relatively easy to make virtual objects occlude world objects in video stream. But the drawbacks is that a huge lag is imposed on all contents (i.e. world imagery and digital content) due to video capturing, processing, and rendering. In this paper, we present a system that implements a stereo VST HMD with world imagery that has high quality (2560×1440 @ 90fps) and low latency (≤30ms). The system utilizes an FPGA that splits the world imagery stream into two datapaths, for high-resolution displaying and low-resolution processing. Thus the SLAM algorithm running on the connected computer is performed on a down-sampled video stream for overlaying digital objects. Before being displayed, the processed video from the computer is synchronized and fused with the high-resolution world imagery.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132785031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating virtual reality experience and performance: a brain based approach","authors":"M. Pike, Eugene Ch’ng","doi":"10.1145/3013971.3014012","DOIUrl":"https://doi.org/10.1145/3013971.3014012","url":null,"abstract":"The recent trend and parallel development/adoption of Virtual Reality, Brain Sensing Measures and associated technology such as Augmented Reality by large corporations, and the rise in the interests in the consumer market have set a positive tone for research in these disciplines. An important human factors area that is a catalyst to broad VR applications is the measure of perception, mental workload, and immersion amongst other issues, which are determining factors in the experience of using virtual environments. Traditional approaches in studying these issues use well-developed subjective measures via questionnaires. A new opportunity in the parallel developments in wearable physiological sensors such as brain scanners could potentially be an objective approach in resolving many subjective uncertainties amongst other prospects. Here, we propose the integration of these two emerging fields in order to provide a continuous, objective, physiological measure of an individual's VR experience for the purposes of enhancing user experience and improving performance. This positional paper attempts to merge two complementary field of work, and discusses implications which could potentially open up avenues of research which were traditionally difficult due to the limitations of equipment, or the lack of quantified approach.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123336854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An improved ANN search algorithm for visual search applications","authors":"Fuqiang Ma, Jing Chen, Yanfeng Tong, Lei Sun","doi":"10.1145/3013971.3014011","DOIUrl":"https://doi.org/10.1145/3013971.3014011","url":null,"abstract":"Approximate nearest neighbor search is a kind of significant algorithm to ensure the accuracy and speed for visual search system. In this paper, we ameliorate the search algorithm following the framework of product quantization. Product quantization can generate an exponentially large codebook by a product quantizer and then achieve rapid search with the asymmetric distance computation or symmetric distance computation, while it will still produce a larger distortion in some cases when calculating the approximate distance. Therefore, we design the hierarchical residual product quantization which simultaneously quantifies the input and residual space and meanwhile we extend the asymmetric distance computation to handle this quantization method which is still very efficient to estimate the approximate distance. We have tested our method on several datasets, and the experiment shows that our method consistently improves the accuracy against the-state-of-the-art methods.","PeriodicalId":269563,"journal":{"name":"Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123643114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}