IEEE Conference on Visual Analytics Science and Technology — Latest Publications

VSim: Real-time Visualization of 3D Digital Humanities Content for Education and Collaboration
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/129-135
Authors: E. Poyart, Lisa M. Snyder, Scott Friedman, P. Faloutsos
Abstract: This paper presents VSim, a framework for the visualization of 3D architectural and archeological models. VSim's design focuses on educational use and scholarly collaboration, an approach that is not commonly found in existing commercial software. Two different camera control modes address a variety of scenarios, and a novel smoothing method allows fluid camera movement. VSim includes the ability to create and display narratives within the virtual environment and to add spatially localized multimedia resources. A new way to associate these resources with points and orientations in space is also introduced.
Citations: 3

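The abstract credits fluid camera movement to a novel smoothing method but gives no details. As a rough illustration of the general idea only, and not VSim's actual algorithm, the Python sketch below applies exponential smoothing to a stream of raw camera positions; the `alpha` parameter and the toy input path are assumptions for illustration.

```python
import numpy as np

def smooth_camera_path(raw_positions, alpha=0.2):
    """Exponentially smooth a sequence of raw camera positions.

    raw_positions: iterable of (x, y, z) samples, e.g. from user input.
    alpha: smoothing factor in (0, 1]; smaller values give smoother,
           but more laggy, camera motion.
    """
    smoothed = []
    current = None
    for p in raw_positions:
        p = np.asarray(p, dtype=float)
        if current is None:
            current = p                               # initialise with the first sample
        else:
            current = alpha * p + (1.0 - alpha) * current
        smoothed.append(current.copy())
    return smoothed

# Example: a jittery straight-line path becomes a fluid one.
raw = [(t + np.random.uniform(-0.3, 0.3), 0.0, 0.0) for t in range(20)]
print(smooth_camera_path(raw, alpha=0.25)[-1])
```
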
Dense 3D Point Cloud Generation from Multiple High-resolution Spherical Images
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/017-024
Authors: A. Pagani, C. Gava, Yan Cui, B. Krolla, Jean-Marc Hengen, D. Stricker
Abstract: The generation of virtual models of cultural heritage assets is of high interest for documentation, restoration, development and promotion purposes. To this aim, non-invasive, easy and automatic techniques are required. We present a technology that automatically reconstructs large-scale scenes from panoramic, high-resolution spherical images. The advantage of spherical panoramas is that they capture a complete environment in a single image. We show that the spherical geometry is better suited than standard images for computing the orientation of the panoramas (Structure from Motion), and we introduce a generic error function for the epipolar geometry of spherical images. We then show how to produce a dense representation of the scene with up to 100 million points, which can serve as input for meshing and texturing software or for computer-aided reconstruction. We demonstrate the applicability of our concept with the reconstruction of complex scenes for cultural heritage documentation at the Chinese National Palace Museum of the Forbidden City in Beijing.
Citations: 29

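The abstract introduces a generic error function for the epipolar geometry of spherical images without stating its form. One common formulation, shown below as a hedged Python sketch rather than the paper's actual function, maps equirectangular pixels to unit bearing vectors and measures the angular deviation of a correspondence from its epipolar plane; the toy essential matrix and pixel coordinates are illustrative assumptions.

```python
import numpy as np

def bearing_from_equirectangular(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit bearing vector on the sphere."""
    lon = (u / width) * 2.0 * np.pi - np.pi            # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi           # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def spherical_epipolar_residual(E, x1, x2):
    """Angular residual of a spherical correspondence under essential matrix E.

    x1, x2 are unit bearing vectors; the residual is the absolute sine of the
    angle between x2 and the epipolar plane whose normal is E @ x1.
    """
    n = E @ x1                                          # normal of the epipolar plane
    return abs(np.dot(x2, n)) / (np.linalg.norm(n) + 1e-12)

# Toy check: a pure translation along x gives E = [t]_x with R = I.
t = np.array([1.0, 0.0, 0.0])
E = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
x1 = bearing_from_equirectangular(1200, 800, 4000, 2000)
# A perfectly consistent x2 lies in the plane spanned by x1 and t.
x2 = x1 + 0.5 * t
x2 /= np.linalg.norm(x2)
print(spherical_epipolar_residual(E, x1, x2))           # ~0 for an inlier
```
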
Automatic Coin Classification by Image Matching
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/065-072
Authors: S. Zambanini, M. Kampel
Abstract: This paper presents an automatic image-based ancient coin classification method that adopts the recently proposed SIFT flow method to assess the similarity of coin images. Our system does not rely on pattern classification, as discriminative feature extraction and classification become very difficult for large coin databases. This is mainly caused by the specific challenges that ancient coins pose to a classification method based on 2D images. In this paper we highlight these challenges and argue for the use of SIFT flow image matching. Our classification system is applied to an image database containing 24 classes of early Roman Republican coinage and achieves a classification rate of 74% on the coins' reverse side. This is a significant improvement over an earlier coin matching method based on interest point matching, which achieves only 33% on the same dataset.
Citations: 20

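The classification scheme described here amounts to nearest-neighbour matching: the query coin is assigned the class of the exemplar with the lowest image-matching cost. The Python sketch below illustrates only that outer loop, with a placeholder cost standing in for the SIFT flow energy (which is not implemented here); the images and class labels are toy assumptions.

```python
import numpy as np

def placeholder_matching_cost(img_a, img_b):
    """Stand-in for a dense matching energy such as the SIFT flow energy.

    Here it is just the mean absolute intensity difference; in the paper the
    cost would come from SIFT flow alignment instead.
    """
    return float(np.mean(np.abs(img_a.astype(float) - img_b.astype(float))))

def classify_by_matching(query, exemplars, cost=placeholder_matching_cost):
    """Assign the class of the exemplar with the lowest matching cost.

    exemplars: list of (class_label, image) pairs.
    """
    best_label, best_cost = None, float("inf")
    for label, image in exemplars:
        c = cost(query, image)
        if c < best_cost:
            best_label, best_cost = label, c
    return best_label, best_cost

# Toy usage with random "coin images" of two classes.
rng = np.random.default_rng(0)
exemplars = [("class_A", rng.integers(0, 255, (64, 64))),
             ("class_B", rng.integers(0, 255, (64, 64)))]
query = exemplars[0][1] + rng.integers(-5, 5, (64, 64))
print(classify_by_matching(query, exemplars))            # expected: class_A
```
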
A Repository for Heterogeneous and Complex Digital Cultural Objects
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/081-087
Authors: A. Felicetti, F. Niccolucci
Abstract: The paper proposes a solution for a repository of digital cultural objects that can manage complex data such as 3D objects, videos and more, together with the related metadata. The repository is built with open-source components and can be easily installed and managed. Using an example, interfaces are shown for the most common operations. The system allows for text searches and semantic searches as well as facet refinements. Thanks to its modularity and easy personalization, the proposed system can support a full-featured digital library.
Citations: 1

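As a hedged illustration of the facet refinement mentioned in the abstract, and not the repository's actual implementation or metadata schema, the Python sketch below counts facet values over a few hypothetical object records and filters by selected facets.

```python
# Hypothetical metadata records for heterogeneous digital cultural objects.
records = [
    {"id": "obj-001", "type": "3D model", "period": "Roman",    "format": "OBJ"},
    {"id": "obj-002", "type": "video",    "period": "Roman",    "format": "MP4"},
    {"id": "obj-003", "type": "3D model", "period": "Etruscan", "format": "PLY"},
]

def facet_counts(records, facet):
    """Count how many records fall under each value of a facet field."""
    counts = {}
    for r in records:
        value = r.get(facet, "unknown")
        counts[value] = counts.get(value, 0) + 1
    return counts

def refine(records, **filters):
    """Keep only the records matching every selected facet value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in filters.items())]

print(facet_counts(records, "type"))                     # {'3D model': 2, 'video': 1}
print(refine(records, type="3D model", period="Roman"))  # [obj-001]
```
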
Real-time Rendering of Massive Unstructured Raw Point Clouds using Screen-space Operators
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/105-112
Authors: R. Pintus, E. Gobbetti, Marco Agus
Abstract: Nowadays, 3D acquisition devices allow us to capture the geometry of huge Cultural Heritage (CH) sites, historical buildings and urban environments. We present a scalable real-time method to render this kind of model without requiring lengthy preprocessing. The method makes no assumptions about sampling density or the availability of normal vectors for the points. On a frame-by-frame basis, our GPU-accelerated renderer computes point cloud visibility, fills and filters the sparse depth map to generate a continuous surface representation of the point cloud, and provides a screen-space shading term to effectively convey shape features. The technique is applicable to all rendering pipelines capable of projecting points to the frame buffer. To deal with extremely massive models, we integrate it within a multi-resolution out-of-core real-time rendering framework with small pre-computation times. Its effectiveness is demonstrated on a series of massive unstructured real-world Cultural Heritage datasets. The small precomputation times and the low memory requirements make the method suitable for quick on-site visualizations during scan campaigns.
Citations: 36

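The renderer described here fills and filters a sparse depth map in screen space on the GPU. As a simplified CPU-side Python sketch of the hole-filling step only, not the paper's exact operator (which runs as a shader pass), the code below assigns each empty pixel the closest valid depth found in a small neighbourhood; the window radius and the toy depth map are assumptions.

```python
import numpy as np

def fill_sparse_depth(depth, radius=2, empty=np.inf):
    """Fill empty pixels of a sparse depth map from their local neighbourhood.

    depth: 2D array where 'empty' marks pixels that no point projected to.
    Each empty pixel takes the minimum (closest-to-camera) valid depth found
    in a (2*radius+1)^2 window, if any valid depth exists there.
    """
    h, w = depth.shape
    filled = depth.copy()
    for y in range(h):
        for x in range(w):
            if depth[y, x] != empty:
                continue
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = depth[y0:y1, x0:x1]
            valid = window[window != empty]
            if valid.size:
                filled[y, x] = valid.min()
    return filled

# Toy sparse depth map: a few projected points, rest empty.
d = np.full((6, 6), np.inf)
d[1, 1], d[2, 4], d[4, 2] = 3.0, 5.0, 4.0
print(fill_sparse_depth(d, radius=2))
```
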
Linking Evidence with Heritage Visualization using a Large Scale Collaborative Interface
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/121-128
Authors: Kim Bale, D. Abbott, Ramy Gowigati, D. Pritchard, P. Chapman
Abstract: The virtual reconstruction of heritage sites and artefacts is a complicated task that requires researchers to gather and assess many different types of historical evidence, which can vary widely in accuracy, authority, completeness, interpretation and opinion. It is now acknowledged that elements of speculation, interpretation and subjectivity form part of 3D reconstruction using primary research sources. Ensuring transparency in the reconstruction process, and therefore the ability to evaluate the purpose, accuracy and methodology of the visualization, is of great importance. Indeed, given the prevalence of 3D reconstruction in recent heritage research, methods of managing and displaying reconstructions alongside their associated metadata and sources have become an emerging area of research. In this paper, we describe the development of techniques that allow research sources to be added as multimedia annotations to a 3D reconstruction of the British Empire Exhibition of 1938. By connecting a series of wireless touchpad PCs with an embedded webserver, we provide users with a unique collaborative interface for semantic description and placement of objects within a 3D scene. Our interface allows groups of users to simultaneously create annotations, whilst also allowing them to move freely within a large display visualization environment. The development of a unique, life-size, stereo visualization of this lost architecture with spatialised semantic annotations enhances not only the engagement with and understanding of this significant event in history, but the accountability of the research process itself.
Citations: 10

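The abstract describes spatially localised multimedia annotations created from wireless touchpad clients and sent to an embedded webserver. As a purely hypothetical sketch of what such an annotation record might look like (the actual schema and endpoint are not given in the abstract), the Python snippet below builds one annotation and serialises it to JSON.

```python
import json

def make_annotation(author, position, orientation, media_uri, note=""):
    """Build a spatially localised annotation record (hypothetical schema)."""
    return {
        "author": author,
        "position": {"x": position[0], "y": position[1], "z": position[2]},
        "orientation": {"yaw": orientation[0], "pitch": orientation[1]},
        "media_uri": media_uri,
        "note": note,
    }

annotation = make_annotation(
    author="researcher_01",
    position=(12.4, 0.0, -3.7),
    orientation=(90.0, 0.0),
    media_uri="archive/pavilion_photo_1938.jpg",      # hypothetical source file
    note="Photograph used as evidence for the facade reconstruction.",
)
# A client tablet might POST this JSON to the scene's embedded webserver.
print(json.dumps(annotation, indent=2))
```
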
Reconstructing and Exploring Massive Detailed Cityscapes
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/001-008
Authors: E. Gobbetti, F. Marton, Marco Di Benedetto, F. Ganovelli, Matthias Bühler, S. Schubiger-Banz, M. Specht, C. Engels, L. Gool
Abstract: We present a state-of-the-art system for obtaining and exploring large-scale three-dimensional models of urban landscapes. A multimodal approach to reconstruction fuses cadastral information, laser range data and oblique imagery into building models, which are then refined by applying procedural rules that replace textures with 3D elements such as windows and doors, thereby enhancing model quality and adding semantics to the model. For city-scale exploration, these detailed models are uploaded to a web-based service, which automatically constructs an approximate, scalable multiresolution representation. This representation can be interactively transmitted and visualized over the net by clients ranging from graphics PCs to web-enabled portable devices. The approach's characteristics and performance are illustrated using real-world city-scale data.
Citations: 1

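The web service described here streams an approximate multiresolution representation to clients of very different capabilities. A standard ingredient of such schemes, sketched below in Python under the assumption of a screen-space-error refinement criterion (the paper's actual criterion is not specified in the abstract), is to refine a node only while its geometric error projects to more than a pixel tolerance.

```python
import math

def projected_error_px(node_error_m, distance_m, fov_y_rad, viewport_h_px):
    """Project a node's geometric error (metres) to screen space (pixels)."""
    # Apparent size in pixels of one metre at the given viewing distance.
    pixels_per_metre = viewport_h_px / (2.0 * distance_m * math.tan(fov_y_rad / 2.0))
    return node_error_m * pixels_per_metre

def refine_node(node_error_m, distance_m, tolerance_px=1.0,
                fov_y_rad=math.radians(60), viewport_h_px=1080):
    """Refine (fetch children) when the projected error exceeds the tolerance."""
    return projected_error_px(node_error_m, distance_m,
                              fov_y_rad, viewport_h_px) > tolerance_px

# A coarse node with 0.5 m error: refine when close, keep when far away.
print(refine_node(0.5, distance_m=50.0))     # True  -> request finer data
print(refine_node(0.5, distance_m=2000.0))   # False -> coarse node suffices
```
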
A Methodology for the Physically Accurate Visualisation of Roman Polychrome Statuary
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/137-144
Authors: Gareth Beale, G. Earl
Abstract: This paper describes the design and implementation of a methodology for the visualisation and hypothetical virtual reconstruction of Roman polychrome statuary for research purposes. The methodology is intended as an attempt to move beyond visualisations that are simply believable towards a more physically accurate approach. Accurate representations of polychrome statuary have great potential utility both as a means of illustrating existing interpretations and as a means of testing and revising developing hypotheses. The goal of this methodology is to propose a pipeline that incorporates a high degree of physical accuracy whilst also being practically applicable in a conventional archaeological research setting. The methodology is designed to allow the accurate visualisation of surviving objects and colourants as well as providing reliable methods for the hypothetical reconstruction of elements which no longer survive. The process proposed here is intended to limit the need for specialist recording equipment, utilising existing data and data that can be collected using widely available technology. It is at present being implemented as part of the 'Statues in Context' project at Herculaneum and will be demonstrated here using the case study of a small area of the head of a painted female statue discovered at Herculaneum in 2006.
Citations: 3

WebGL-based Streaming and Presentation Framework for Bidirectional Texture Functions
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/113-120
Authors: Christopher Schwartz, R. Ruiters, Michael Weinmann, R. Klein
Abstract: Museums and Cultural Heritage institutions have a growing interest in presenting their collections to a broader community via the Internet. The photo-realistic presentation of interactively inspectable digital 3D replicas of artifacts is one of the most challenging problems in this field. For this purpose, we seek not only a 3D geometry but also a powerful material representation capable of reproducing the full visual appeal of an object. In this paper, we propose a WebGL-based presentation framework in which reflectance information is represented via Bidirectional Texture Functions. Our approach works out of the box in modern web browsers and allows for the progressive transmission and interactive rendering of digitized artifacts consisting of 3D geometry and reflectance information. We handle the huge amount of data needed for this representation by employing a novel progressive streaming approach for BTFs, which allows for the smooth interactive inspection of a steadily improving version during the download.
Citations: 19

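BTFs are commonly compressed with a factorisation such as an SVD, and progressive streaming can then transmit the most significant components first so that the client's reconstruction improves as data arrives. The Python sketch below illustrates this general idea on a toy matrix; it is an assumption-laden stand-in, not the paper's actual compression or streaming format.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for a BTF data matrix: rows = texels, columns = view/light pairs.
btf_matrix = rng.standard_normal((256, 32)) @ rng.standard_normal((32, 151))

# Server side: factorise once, then transmit components in order of importance.
U, s, Vt = np.linalg.svd(btf_matrix, full_matrices=False)

def stream_components(U, s, Vt):
    """Yield rank-1 components, most significant (largest singular value) first."""
    for k in range(len(s)):
        yield s[k], U[:, k], Vt[k, :]

# Client side: accumulate components and refine the reconstruction on the fly.
reconstruction = np.zeros_like(btf_matrix)
for i, (sigma, u, v) in enumerate(stream_components(U, s, Vt), start=1):
    reconstruction += sigma * np.outer(u, v)
    if i in (1, 8, 32):
        err = np.linalg.norm(btf_matrix - reconstruction) / np.linalg.norm(btf_matrix)
        print(f"after {i:2d} components, relative error = {err:.3f}")
```
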
Point Cloud Segmentation for Cultural Heritage Sites
IEEE Conference on Visual Analytics Science and Technology · Pub Date: 2011-10-18 · DOI: 10.2312/VAST/VAST11/041-048
Authors: Sandro Spina, K. Debattista, Keith Bugeja, A. Chalmers
Abstract: Over the past few years, the acquisition of 3D point information representing the structure of real-world objects has become common practice in many areas. This is particularly true in the Cultural Heritage (CH) domain, where point clouds reproducing important and usually unique artifacts and sites of various sizes and geometric complexities are acquired. Specialized software is then usually used to process and organise this data. This paper addresses the problem of automatically organising this raw data by segmenting point clouds into meaningful subsets. This organisation of the raw data reduces complexity and facilitates the post-processing required to work with the individual objects in the scene. The paper describes an efficient two-stage segmentation algorithm that automatically partitions raw point clouds. Following an initial partitioning of the point cloud, a RANSAC-based plane fitting algorithm is used to add a further layer of abstraction. A number of potential uses of the newly processed point cloud are presented, one of which is object extraction using point cloud queries. Our method is demonstrated on three point clouds ranging from 600K to 1.9M points. One of these point clouds was acquired from the prehistoric temple of Mnajdra, which consists of multiple adjacent complex structures.
Citations: 13

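The second stage mentioned in the abstract uses RANSAC-based plane fitting. The Python sketch below shows a generic RANSAC plane fit on a toy point cloud; the iteration count, inlier threshold and synthetic data are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def ransac_plane(points, iterations=200, threshold=0.02, rng=None):
    """Fit a plane to a point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n . x + d = 0 with the
    largest consensus set; 'threshold' is the inlier distance in scene units.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -np.dot(normal, sample[0])
        dist = np.abs(points @ normal + d)       # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Toy cloud: a noisy ground plane z ~ 0 plus scattered outliers.
rng = np.random.default_rng(42)
plane_pts = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.005, 500)]
outliers = rng.uniform(-1, 1, (100, 3))
normal, d, inliers = ransac_plane(np.vstack([plane_pts, outliers]))
print(normal, d, inliers.sum())                  # normal ~ (0, 0, ±1), ~500 inliers
```
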