{"title":"Anaglyph 3D Stereoscopic Visualization of 2D Video Based on Fundamental Matrix","authors":"Zhihan Lu, S. Réhman, Muhammad Sikandar Lal Khan, Haibo Li","doi":"10.1109/ICVRV.2013.59","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.59","url":null,"abstract":"In this paper, we propose a simple anaglyph 3D stereo generation algorithm for 2D video sequences captured with a monocular camera. In our novel approach, we employ a camera pose estimation method to generate stereoscopic 3D directly from 2D video without explicitly building a depth map. Our cost-effective method is suitable for arbitrary real-world video sequences and produces smooth results. We use image stitching based on plane correspondence via the fundamental matrix, and we demonstrate that plane-correspondence image stitching based on the homography matrix alone cannot produce comparable results. Furthermore, we utilize the camera pose model reconstructed by structure from motion (with the fundamental matrix) to accomplish the visual anaglyph 3D illusion. The proposed approach performs very well on most video sequences.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116907484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Laser Sheet Scanning Based Smoke Acquisition and Reconstruction","authors":"Xin Gao, Yong Hu, Qing Zuo, Yue Qi","doi":"10.1109/ICVRV.2013.15","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.15","url":null,"abstract":"This paper develops a laser-sheet-scanning technique for capturing and reconstructing sequential volumetric models of smoke. First, a dedicated setup is introduced as the laser sheet illuminator for horizontal scanning. To achieve accurate acquisition, a signal synchronization scheme is added between the galvanometer and the high-speed camera. Then, with the laser sheet sweeping through the volume repeatedly, the illuminated smoke slices are captured; each sweep of the laser records a near-simultaneous smoke density field. In the subsequent reconstruction procedure, the 3D positions of the pixels in the captured images are calculated through camera and laser calibration. Finally, these irregular smoke density fields are resampled by a 3D ordinary Kriging interpolation algorithm and reconstructed into regular smoke volumetric models. In the experimental results, the fidelity of the visualized smoke volumetric models demonstrates that our approach is effective for realistic smoke modeling.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117218098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Divide and Conquer Ray Tracing Algorithm Based on BVH Partition","authors":"Wu Zhefu, Yu Hong, Chen Bin","doi":"10.1109/ICVRV.2013.16","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.16","url":null,"abstract":"A new fast divide-and-conquer ray tracing algorithm based on BVH partitioning is proposed to remove unnecessary rays from subspaces, addressing the problem that space-subdivision schemes yield bounding boxes that do not tightly enclose the primitives and therefore admit unnecessary rays into each subspace. Its core idea is to use a bin-based BVH construction algorithm to partition the primitives into two parts, and then distribute primitives and rays into the corresponding subspaces using a stream filter. Once the numbers of rays and primitives intersecting a subspace meet a limit condition, basic ray tracing is performed on that subspace. A comparison between divide-and-conquer ray tracing using BVH and using space-subdivision schemes such as kd-tree and grid shows that our method substantially reduces computation on unnecessary rays in subspaces and achieves significantly faster performance.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"321 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127506877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Depth Recovery Approach from Multi-View Stereo Based Focusing","authors":"Zhaolin Xiao, Heng Yang, Qing Wang, Guoqing Zhou","doi":"10.1109/ICVRV.2013.34","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.34","url":null,"abstract":"In this paper, we propose a novel depth recovery method based on multi-view stereo and focusing. Inspired by 4D light field theory, we uncover the relationship between classical multi-view stereo (MVS) and depth-from-focus (DFF) methods, and examine their different frequency distributions in the 2D light field space. We then separate depth recovery into two steps: in the first stage, we choose depth candidates using an existing multi-view stereo method; in the second stage, a depth-from-focus algorithm determines the final depth. As is well known, multi-view stereo and depth from focus require different kinds of input images, which cannot be acquired simultaneously with a traditional imaging system. We address this issue with a camera array system and synthetic aperture photography, so that both multi-view images and distinctly defocus-blurred images can be captured at the same time. Experimental results show that our method combines the advantages of MVS and DFF, and the recovered depth is better than that of traditional methods.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114382616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Outliers Elimination Based Ransac for Fundamental Matrix Estimation","authors":"Shuqiang Yang, Biao Li","doi":"10.1109/ICVRV.2013.63","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.63","url":null,"abstract":"To accelerate the RANSAC process for fundamental matrix estimation, two modifications to RANSAC are proposed. Firstly, in the verification stage, each hypothesis is evaluated directly from the singular values of the estimated fundamental matrix rather than from the correspondences. Secondly, after a plausible estimate is obtained, the obvious outliers are eliminated from the correspondence set. This raises the inlier ratio in the remaining correspondences, which accelerates the sampling process. We call our method outlier-elimination-based RANSAC (OE-RANSAC). Experimental results on both synthetic and real data demonstrate the efficiency of OE-RANSAC.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"06 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129410184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Edge-Guided Depth Map Resampling for HEVC 3D Video Coding","authors":"Yi Yang, Jiangbin Zheng","doi":"10.1109/ICVRV.2013.29","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.29","url":null,"abstract":"The multi-view video plus depth (MVD) format is considered essential for next-generation three-dimensional television (3DTV), and its compression is crucial. Depth images are characterized by large homogeneous areas and sharp edges between objects, and it has been observed that efficient compression can be achieved by a down/up-sampling procedure as pre- and post-processing around video coding. We propose an edge-guided depth map resampling method based on this scheme: we combine the edge information of both texture and depth images for edge preservation, and extend gradient-domain image reconstruction to depth up-sampling, forming a linear equation system solved in the least-squares sense. Experimental results show that the proposed method improves both depth map coding efficiency and synthesized view quality. Additionally, the up-scaling method can be used for super-resolution reconstruction of depth data captured by depth sensors such as Kinect or ToF cameras.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127515684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Saliency-Guided Luminance Enhancement for 3D Shape Depiction","authors":"W. Hao, Yinghui Wang","doi":"10.1109/ICVRV.2013.10","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.10","url":null,"abstract":"In this paper, we present a novel saliency-guided shading scheme for 3D shape depiction that incorporates mesh saliency into luminance enhancement. Using a distance-based mesh saliency computation, we propose a new perceptual saliency measure that identifies salient surface regions. Guided by these visually salient regions, we emphasize both the details and the overall shape of models by locally enhancing the high-frequency components of vertex luminance. The enhancement strength is determined by the surface shape rather than controlled by the user. Experimental results demonstrate that our method produces satisfying results with Phong shading, Gooch shading, and cartoon shading. Compared to previous techniques, our approach effectively improves shape depiction without impairing the desired appearance.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132510404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Potential Field Based Cooperative Particle Filter for Multi-View Multi-Object Tracking","authors":"Xiao-min Tong, Yanning Zhang, Tao Yang","doi":"10.1109/ICVRV.2013.20","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.20","url":null,"abstract":"To continuously track multiple occluded objects in crowded scenes, we propose a new multi-view multi-object tracking method based on an artificial potential field and cooperative particle filters, in which we combine bottom-up and top-down tracking for better results. After obtaining an accurate occupancy map through the multi-planar consistency constraint, we predict the tracking probability map via cooperation among multiple particle filters. The key idea is that the cooperation among multiple particle filters is treated as path planning, and the particles' random shifting is guided by the artificial potential field. Comparative experiments against a traditional blob-detection tracking algorithm demonstrate the effectiveness and robustness of our method.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"267 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121149454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D-Realtime-Monitor System for Lunar Rover","authors":"P. Zhang, Guopeng Li, Jianjun Liu, X. Ren, Xingye Gao","doi":"10.1109/ICVRV.2013.51","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.51","url":null,"abstract":"The 3D-Realtime-Monitor system is a real-time virtual reality system consisting of a data server and a rendering client. It is driven by runtime telemetry data and integrates kinematic and dynamic models of the rover as well as a real lunar surface terrain model. The processing of the telemetry data is described in detail. The modeling methods proposed in this paper include constructing the lunar surface, constructing 3D models of the lander and rover, building the kinematic model of the rover body, and building the wheel-terrain interaction model. Photogrammetry techniques and remote sensing information are used to generate the terrain model of the lunar surface. The implementation results show that the 3D-Realtime-Monitor system is an effective assistant for making exploration plans and monitoring the status of the rover.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"112 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116256426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Interactive Warping Method for Multi-channel VR Projection Display Systems with Quadric Surface Screens","authors":"Fang Sun, Weiliang Meng","doi":"10.1109/ICVRV.2013.9","DOIUrl":"https://doi.org/10.1109/ICVRV.2013.9","url":null,"abstract":"In this paper, we present a practical, non-camera-based interactive warping method for multi-channel immersive VR projection display systems with quadric surface screens. Instead of using one or more cameras as most previous methods do, we employ a commercial theodolite and a mouse to interactively calibrate each projector on site. By exploiting the shape of the curved screen, we perform fast, robust projector calibration and compute the warping map for each projector, taking other system information into account, i.e., the position and frustum of the designed eye point (DEP). Compared with camera-based solutions, our method is accurate, cost-effective, and simple to operate, and it substantially reduces system set-up time and complexity. The feasibility of our method has been verified in many real-world installations.","PeriodicalId":179465,"journal":{"name":"2013 International Conference on Virtual Reality and Visualization","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116355679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}