2011 International Conference on Virtual Reality and Visualization: Latest Publications

Multi-view Stereo Reconstruction for Internet Photos
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.22
Sijiao Yu, Yue Qi, Xukun Shen
This paper develops a multi-view stereo approach to reconstruct the shape of a 3D object from a set of Internet photos. The stereo matching technique adopts a region-growing approach: starting from a set of sparse 3D points reconstructed by structure from motion (SfM), it propagates to neighbouring areas with a best-first strategy and produces dense 3D points. View-selection and filtering algorithms are proposed that account for the characteristics of Internet images. Specifically, for each 3D point we first choose a reference image, which determines the right subset of images from the unordered set for optimization. Two filters, an SfM-point filter and a quality filter based on the assumption that depth changes smoothly, are designed to eliminate low-quality reconstructions. We demonstrate our algorithms on several datasets, which show that they perform robustly.
Citations: 0
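The best-first propagation described above can be sketched with a priority queue; the photometric scoring (e.g. NCC over patches) and the pixel adjacency are abstracted into caller-supplied callables, so this is an illustrative skeleton rather than the paper's implementation:

```python
import heapq

def best_first_grow(seeds, score, neighbors, threshold=0.7):
    """Best-first region growing as in the dense-matching stage: seed
    candidates come from sparse SfM points; the candidate with the
    highest consistency score is expanded first, and its neighbors are
    queued in turn. `score` and `neighbors` are stand-ins for the
    photometric matching and pixel adjacency used in practice."""
    heap = [(-score(p), p) for p in seeds]   # max-heap via negated scores
    heapq.heapify(heap)
    accepted = set()
    while heap:
        neg, p = heapq.heappop(heap)
        if p in accepted or -neg < threshold:
            continue                         # skip duplicates and low-quality candidates
        accepted.add(p)
        for q in neighbors(p):               # propagate to neighbouring areas
            if q not in accepted:
                heapq.heappush(heap, (-score(q), q))
    return accepted

# Toy run on a 1-D "scanline": match quality falls off away from the seed.
quality = {i: max(0.0, 1.0 - 0.1 * abs(i - 5)) for i in range(11)}
grown = best_first_grow([5], quality.__getitem__,
                        lambda i: [j for j in (i - 1, i + 1) if j in quality])
print(sorted(grown))  # positions whose score stays at or above the threshold
```

Growth stops exactly where the score drops below the acceptance threshold, which is the mechanism the filters exploit to keep low-quality points out of the dense reconstruction.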
Internet Video Search and Repurposing through Face Analysis
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.58
Xin Sun, Xiao Zhang, Shan Wang, Ke-yan Liu, Tong Zhang
With the ever-growing amount of video on the Internet, searching for desired videos effectively and efficiently remains a challenge. In addition, repurposing videos of interest into new, attractive photo/video products has been an open issue. In this paper, we propose a framework for video retrieval and repurposing that leverages the face information in videos. Since a text query cannot express the user's intent precisely and may generate noisy results, an automatic query-image generation method is proposed to provide the user with visual and intuitive candidate images. Then, based on the user-selected query image, videos are re-ranked through content analysis so that videos with higher relevance rise to the top. Furthermore, relevant segments and frames are obtained and can be used to compose customized photo/video products, giving the user a fresh experience as a creator. A prototype video retrieval and repurposing system is implemented for retrieving videos of celebrities and generating customized products. Experiments were conducted on 3416 video clips covering about 103 celebrities downloaded from YouTube. The experimental results show the effectiveness of the proposed method.
Citations: 0
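The re-ranking step, which sorts candidate videos by their relevance to the user-selected query image, can be sketched as follows; the face descriptors and cosine similarity here are illustrative stand-ins for the paper's content analysis:

```python
def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def rerank(videos, query_vec, similarity):
    """Re-rank videos so those whose detected faces best match the
    query image's face descriptor rise to the top. Each video scores
    as the best match among its face descriptors; videos with no
    detected faces sink to the bottom."""
    return sorted(
        videos,
        key=lambda v: max((similarity(query_vec, f) for f in v["faces"]),
                          default=0.0),
        reverse=True)

# Hypothetical 2-D face descriptors; a real system would use learned features.
videos = [
    {"id": "clip1", "faces": [(0.1, 0.9), (0.2, 0.8)]},
    {"id": "clip2", "faces": [(0.9, 0.1)]},
    {"id": "clip3", "faces": []},
]
ranked = rerank(videos, (1.0, 0.0), cosine)
print([v["id"] for v in ranked])  # clip2 first: its face best matches the query
```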
Curvature-Constrained Feature Graph Extraction
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.31
Li-ru Han
This paper proposes a shape descriptor with a feature graph that highlights both the topological structure and the geometric features of a 3D mesh model. First, the geodesic distance is adopted to compute an invariant mapping function on the 3D mesh model, yielding a Reeb graph (RG) skeleton. Second, discrete curvature values at the mesh vertices are analyzed to detect topological changes and to specify articulated details. Finally, new nodes denoting the articulation features are extracted and used to adaptively update the original Reeb graph. The enhanced feature graph provides an affine-invariant and visually meaningful skeleton of an arbitrary topological shape in reasonable execution time. A series of experiments demonstrates the robustness and efficiency of the proposed algorithm.
Citations: 0
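The first stage, a geodesic-distance mapping function that drives Reeb graph construction, can be sketched as below. The mesh is assumed to be given as an edge-weighted adjacency list, and the curvature analysis and graph-refinement stages are omitted:

```python
import heapq
from collections import defaultdict

def geodesic(adj, src):
    """Dijkstra over the mesh edge graph approximates geodesic distances."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def mapping_function(adj):
    """mu(v) = sum of geodesic distances from v to all vertices, a
    pose-invariant function commonly used to build Reeb graph skeletons."""
    return {v: sum(geodesic(adj, v).values()) for v in adj}

def level_sets(mu, bins=4):
    """Quantize mu into intervals; connected vertices sharing an interval
    approximate the Reeb graph nodes."""
    lo, hi = min(mu.values()), max(mu.values())
    width = (hi - lo) / bins or 1.0
    groups = defaultdict(set)
    for v, m in mu.items():
        groups[min(int((m - lo) / width), bins - 1)].add(v)
    return groups

# Toy "mesh": a 3-vertex path with unit-length edges.
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0)]}
mu = mapping_function(adj)   # {0: 3.0, 1: 2.0, 2: 3.0}
```

The endpoints of the path get the largest mu, which is why extremities and articulations show up as distinguished nodes in the resulting skeleton.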
Cybernetics Based Model of Forces with Genotypes
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.17
Zhang Wei, Zeng Liang, Li Sikun, Yueshan Xiong, Wanying Xu
In this paper, we propose a cybernetics-based model of forces, called GSCP, that integrates personality and can simulate how psychological factors affect the behavior of entities. The modeling is achieved by introducing a genotype sequence that embeds the personality of entities into behavior decision-making. To decide the next behavior, a decision-making mechanism called Action Selection Logic (ASL) is proposed. In this logic, the concept of potential energy, borrowed from physics, is introduced to describe an entity's motivation. The task of ASL is to ensure that entities always take the actions that lead to the lowest point of the potential-energy surface, which is also called a steady state in cybernetics. Personalities and emotional factors are modeled as genotype sequences that affect an entity's motivation intensity, which ultimately acts on the choice of actions through ASL. The concept model and architecture of the GSCP model are presented, and the modeling approach is described strictly in formal language. Finally, simulation results show that entities with different emotions choose different behaviors, demonstrating that individual emotional factors have an actual impact on motivations and behaviors. Furthermore, GSCP does not need to maintain a large number of states and mapping rules, and it successfully avoids combinatorial explosion, which makes it more flexible and reliable.
Citations: 0
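A minimal sketch of the Action Selection Logic idea: each entity picks the action that minimizes its potential energy, with genotype weights modulating motivation. All names, values, and the weighting scheme here are illustrative assumptions, not the paper's formal model:

```python
def action_selection(actions, potential, genotype):
    """ASL sketch: each candidate action is scored by the potential
    energy of the state it leads to, scaled by a genotype weight that
    encodes personality/emotion; the entity takes the action driving it
    toward the lowest point of the potential-energy surface (the steady
    state in cybernetics terms)."""
    def motivation(action):
        # A genotype "gene" amplifies or damps the drive behind each action.
        return potential[action] * genotype.get(action, 1.0)
    return min(actions, key=motivation)

# Two entities with different genotypes face the same choice.
potential = {"flee": 2.0, "fight": 3.0, "hide": 2.5}
timid = {"flee": 0.5, "fight": 2.0}   # fleeing costs a timid entity little
brave = {"flee": 2.0, "fight": 0.5}
print(action_selection(list(potential), potential, timid))  # flee
print(action_selection(list(potential), potential, brave))  # fight
```

Because the genotype only reweights a shared potential surface, no per-entity state-to-action mapping rules are stored, which is the sense in which this scheme sidesteps combinatorial explosion.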
Local Deformation and Crack Simulation of Plastic Thin Shell
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.64
Bo Wu, Jiangfan Ning, Jiawen Ma, L. Zeng, Sikun Li
We present a meshless method for local deformation and crack simulation of plastic thin shells. Although previous meshless methods have performed similar simulations, the moment matrix may become singular, in which case the shape function cannot be constructed and the simulation cannot proceed; special work is needed to deal with this problem. In this paper, we adopt a meshless method, the Local Radial Basis Point Interpolation Method (LRPIM), to carry out the simulation. The shape function is constructed using radial basis functions, which guarantees that the moment matrix is nonsingular without any extra treatment. Results show that our method is feasible and effective.
Citations: 0
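The core claim, that a radial-basis moment matrix stays invertible for distinct nodes so the shape function can always be constructed, can be illustrated with a Gaussian RBF, whose moment matrix is symmetric positive-definite for distinct points. This is a generic point-interpolation sketch, not the paper's LRPIM formulation:

```python
import numpy as np

def rbf_shape_functions(nodes, x, c=1.0):
    """Point-interpolation shape functions from Gaussian radial basis
    functions: phi(x) = R^{-1} r(x), where R is the RBF moment matrix.
    For distinct nodes R is symmetric positive-definite, hence always
    invertible -- no special handling needed, unlike a polynomial basis.
    The shape functions satisfy the Kronecker-delta property at nodes."""
    nodes = np.asarray(nodes, float)
    def gauss(a, b):
        return np.exp(-np.sum((a - b) ** 2, axis=-1) / (c * c))
    R = gauss(nodes[:, None, :], nodes[None, :, :])   # moment matrix
    r = gauss(np.asarray(x, float), nodes)
    return np.linalg.solve(R, r)                      # R symmetric, so R^{-1} r

nodes = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
phi = rbf_shape_functions(nodes, nodes[0])
# Kronecker-delta property at a node: phi is [1, 0, 0, 0] (up to round-off).
```

Interpolating a field is then just `phi @ nodal_values`; the delta property means nodal values are reproduced exactly, which simplifies imposing boundary conditions in meshless simulations.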
Collaborative Augmented Reality Ping-Pong Via Markerless Real Rackets
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.63
Yong Yan, Xiaowu Chen, Xin Li
This article proposes a method of constructing a ping-pong system with markerless real rackets in collaborative augmented reality. With only a pair of video cameras, and without any other sensors or artificial markers, users can use real rackets to hit a virtual ping-pong ball on a virtual table and interact with remote partners in an augmented-reality scene, just as if they were playing ping-pong in the same place. First, the real racket is detected and tracked in real time in the video captured by a single camera at each site; through 3D registration, the real racket can seamlessly interact with the virtual ball and table. Then, a communication scheme is designed for consistent perception between users in the collaborative augmented-reality ping-pong system. To achieve real-time interaction, the whole method is implemented in a parallel computing environment on multi-core processors. Experimental results demonstrate that our system provides consistent perception and natural user interaction with low latency and high precision.
Citations: 3
Image Based Experiment Scene Builder for Virtual Educational Experiments
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.62
Changjian Chi, Xiaowu Chen, Ziqiang Yang, Guodong Jia
The realism and simplicity of virtual experiment scene building influence the usability of virtual educational experiment applications. This paper proposes an image-based method of experiment scene building for virtual educational experiments. First, virtual experiment components are extracted from images of experimental instruments using an interactive object-extraction method. Then, the corresponding logical model descriptions are attached to the extracted components. Finally, the components required for a specific experiment are inserted into the selected virtual experiment scene and connected interactively. A routing algorithm based on equivalent connection lines is designed to make the generated connections between components clear and legible. A virtual experiment scene building system, named VE Scene Builder, has been implemented and widely used. Feedback demonstrates that our system can rapidly generate realistic and legible virtual experiment scenes.
Citations: 1
Fuzzy Feature Visualization of Vector Field by Entropy-Based Texture Adaptation
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.41
Huaihui Wang, Huaxun Xu, L. Zeng, Sikun Li
Texture control is a challenging issue in texture-based feature visualization. To visualize as much information as possible, this paper presents a texture adaptation technique for fuzzy feature visualization of 3D vector fields, taking into account the information quantity carried by the vector field and the texture, based on an extended information entropy. Two information measurements for the 3D vector field and the noise texture, MIE and RNIE, are proposed to quantitatively represent the information they carry. A noise-generation algorithm, based on three principles derived from minimizing the difference between MIE and RNIE, is designed to obtain an approximately optimal distribution of noise fragments that shows more detail than those used before. A discussion of results demonstrates that our algorithm leads to more reasonable visualizations based on fuzzy feature measurement and information quantity.
Citations: 1
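The underlying idea of measuring a vector field's information content with entropy can be sketched as follows. The exact MIE/RNIE definitions are not reproduced in the abstract, so this uses a plain direction-histogram entropy as an illustrative stand-in:

```python
import math
from collections import Counter

def direction_entropy(vectors, bins=8):
    """Shannon entropy of a 2-D vector field's direction histogram.
    Uniform flow carries little information (low entropy); turbulent
    regions carry much more (high entropy), so an entropy-guided scheme
    would spend its noise fragments / texture detail there."""
    def bin_of(v):
        angle = math.atan2(v[1], v[0]) % (2 * math.pi)
        return int(angle / (2 * math.pi) * bins) % bins
    counts = Counter(bin_of(v) for v in vectors)
    n = len(vectors)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

uniform = [(1.0, 0.0)] * 16                       # laminar patch
turbulent = [(math.cos(t), math.sin(t))           # evenly spread directions
             for t in [(i + 0.5) * 2 * math.pi / 16 for i in range(16)]]
print(direction_entropy(uniform))    # 0.0 bits
print(direction_entropy(turbulent))  # 3.0 bits (8 bins, evenly filled)
```

Comparing such an entropy for the field against one for the noise texture is the kind of quantity-matching the MIE/RNIE pair is designed for.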
Leaf Vein and Contour Extraction from Point Cloud Data
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.40
Zhihui Sun, Shenglian Lu, Xinyu Guo, Yuan Tian
The venation and contour of plant leaves are significant for many agronomic applications, such as identifying plant species, exploring genetic relationships among plants, and reconstructing the 3D shape of leaves. This paper presents a method for extracting leaf veins and contours from point cloud data. First, leaf veins are extracted using the curvature information of the point cloud; then a mesh model of the plant leaf is constructed and the leaf contour is extracted with a mesh algorithm. The final leaf vein and contour are obtained after a combined fitting-and-repair process applied to the extracted veins and edges. Experimental results demonstrate that this method can extract leaf veins and contours from laser-scanned point cloud data.
Citations: 12
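The vein-extraction step selects high-curvature points from the cloud. The abstract does not specify its curvature estimator, so the sketch below uses a common point-cloud stand-in, PCA surface variation over k-nearest neighbours, on a synthetic leaf with a raised midrib:

```python
import numpy as np

def surface_variation(points, k=8):
    """Per-point curvature proxy via PCA of the k nearest neighbours:
    sigma = lambda_min / (lambda_1 + lambda_2 + lambda_3). Flat blade
    regions score near zero; the crease along a vein scores higher."""
    pts = np.asarray(points, float)
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    sigma = np.empty(len(pts))
    for i, row in enumerate(d2):
        nbrs = pts[np.argsort(row)[:k]]            # k nearest (includes self)
        lam = np.linalg.eigvalsh(np.cov(nbrs.T))   # ascending eigenvalues
        sigma[i] = lam[0] / lam.sum()
    return sigma

# Synthetic "leaf": a flat grid with a sharp crease (midrib) along x = 0.
xs, ys = np.linspace(-1, 1, 21), np.linspace(0, 1, 11)
X, Y = np.meshgrid(xs, ys)
Z = 0.2 * np.exp(-(X / 0.1) ** 2)
cloud = np.c_[X.ravel(), Y.ravel(), Z.ravel()]
sigma = surface_variation(cloud)
vein = cloud[sigma > 0.01]   # high-variation points trace the crease
```

A real pipeline would then fit a curve through `vein` and repair gaps, which corresponds to the fitting-and-repair stage mentioned above.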
An Adaptive Sampling Based Parallel Volume Rendering Algorithm
2011 International Conference on Virtual Reality and Visualization Pub Date: 2011-11-04 DOI: 10.1109/ICVRV.2011.61
Huawei Wang, Li Xiao, Yi Cao
In this paper, a parallel ray-casting volume rendering algorithm based on adaptive sampling is presented for visualizing TB-scale time-varying scientific data. The algorithm samples a data field adaptively according to its internal variation, and thus places sampling points only at important positions. To integrate adaptive sampling into the parallel rendering framework, an efficient method is proposed to handle the resulting unstructured sampling data. Experiments demonstrate that the proposed algorithm can effectively render internal data features at high quality.
Citations: 1
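Sampling according to local variation can be sketched along a single ray: shrink the step where the field changes quickly so samples concentrate at important positions. The step rule below is an illustrative choice, not the paper's criterion:

```python
import math

def adaptive_samples(grad, t0, t1, base_step=0.1, min_step=0.01):
    """March along one ray, stepping inversely with the local variation
    |grad(t)|, clamped so the step never drops below min_step. Smooth
    regions are crossed in big strides; sharp features get dense samples."""
    ts, t = [], t0
    while t < t1:
        ts.append(t)
        t += max(base_step / (1.0 + abs(grad(t))), min_step)
    return ts

# Toy field: a sharp material boundary (sigmoid) at t = 0.5.
f = lambda t: 1.0 / (1.0 + math.exp(-80.0 * (t - 0.5)))
g = lambda t: 80.0 * f(t) * (1.0 - f(t))   # analytic derivative
ts = adaptive_samples(g, 0.0, 1.0)
diffs = [b - a for a, b in zip(ts, ts[1:])]
# Fine steps (~min_step) cluster at the boundary; coarse steps (~base_step)
# cover the homogeneous regions, for far fewer samples than uniform fine stepping.
```

The resulting sample positions are irregular, which is exactly why the parallel framework described above needs a dedicated scheme for handling unstructured sampling data.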