Proceedings of the 27th International Conference on 3D Web Technology: Latest Publications

The Keys to an Open, Interoperable Metaverse
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564575
Anita Havele, Nicholas F. Polys, W. Benman, D. Brutzman
Abstract: The term 'Metaverse' has taken on new interest recently, appearing prominently in the marketing materials of a number of large technology companies. Indeed, many have attempted, or are attempting, to co-opt it for their own purposes, which has resulted in a great deal of confusion among producers and consumers in the marketplace. With this paper, the Web3D Consortium seeks to address this confusion by exploring the history of the 'Metaverse', providing a workable definition of the term, and offering a vision for its sustainable, cooperative construction into the future. We believe that all the technologies are in place to fulfill the vision of an open, equitable, and ubiquitous information space. What remain are the key issues that have kept the Metaverse from manifesting for the last two decades: poor user experience and poor corporate cooperation.
Citations: 6
Document Segmentation for WebAR Applications
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564570
Thibault Lelong, M. Preda, T. Zaharia
Abstract: In recent years, we have witnessed the appearance of consumer Augmented Reality (AR) applications available natively on smartphones. More recently, these applications have also been implemented in web browsers. Among various AR applications, a simple one consists in detecting a target object filmed by the phone and triggering an event upon detection. The target object can be of any kind, including 3D objects or simpler documents and printed pictures. The underlying process consists in comparing the image captured by the camera against a large-scale remote image database. The goal is then to display new content over the target object while maintaining 3D spatial registration. When the target object is a document (or printed picture), the image captured by the camera often contains a great deal of useless information (such as the background). It is therefore more efficient to segment the captured image and send to the server only the representation of the target object. In this paper, we propose a deep-learning (DL) based method for fast detection and segmentation of printed documents within natural images. The goal is to provide a light and fast DL model that can be used directly in the web browser on mobile devices. We designed a compact and fast DL architecture that keeps the same accuracy as the reference architecture while dividing the inference time by 3 and the number of parameters by 10.
Citations: 0
Challenges in Applying Deep Learning to Augmented Reality for Manufacturing
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564572
Hugo Durchon, Marius Preda, T. Zaharia, Yannick Grall
Abstract: Augmented Reality (AR) for industry has become a significant research area because of its potential benefits for operators and factories. AR tools could help collect data, create standardized representations of industrial procedures, guide operators in real time during operations, assess factory efficiency, and build personalized training and coaching systems. However, AR is not yet widely deployed in industry, due to several factors: hardware, software, user acceptance, and companies' constraints. One of the causes we have identified in our factory is the poor user experience of AR assistance software. We argue that adding computer vision and deep learning (DL) algorithms to AR assistance software could improve the quality of interactions with the user, handle dynamic environments, and facilitate AR adoption. We conduct a preliminary experiment aiming to perform 3D pose estimation of a boiler with MobileNetV2 in an uncontrolled industrial environment. This experiment produces results that are insufficient for direct use but allow us to establish a list of challenges and perspectives for future work.
Citations: 0
A New Database for Image Retrieval of Camera-Filmed Printed Documents
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564569
Thibault Lelong, M. Preda, T. Zaharia
Abstract: The massive use of phones and their cameras is driving research on augmented reality technologies that can run in a browser. Indeed, this could turn every physical medium into an access point for digital information. One family of objects used in such scenarios is printed material. Applications that augment printed material with additional content such as videos, 3D animations, and sound follow the same scenario: the printed material is filmed by the phone camera, and the captured image is sent to a server that runs image-recognition algorithms to retrieve a similar image from a database. Several technological building blocks compose this pipeline, including image segmentation (usually done on the phone to extract only the pixels corresponding to the printed material) and image recognition (usually performed on the server). New methods and tools are proposed every year to address them; however, there is still no common database for benchmarking these new methods. In this paper, we propose such a database and make it publicly available at https://github.com/Ttibo/A-new-database-for-image-retrieval-of-camera-filmed-printed-documents
Citations: 0
Evaluation of Simplified 3D CAD Data for Conveying Industrial Assembly Instructions via Augmented Reality
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564568
Abhayadhathri Arige, T. Lavric, Marius Preda, T. Zaharia
Abstract: Augmented Reality (AR) based training is gaining momentum in industrial sectors, particularly in assembly and maintenance. Generally, the media used to create AR assembly instructions include audio, video, images, text, signs, 3D data, and animations. The literature suggests that 3D CAD-based AR instructions spatially registered with the real-world environment are more effective and produce better training results. However, storing, processing, and rendering 3D data can be challenging even for state-of-the-art AR devices such as the HoloLens 2, particularly in industrial use. To address these concerns, heavy 3D models can be simplified to a certain extent with minimal impact on the user experience, that is, on the quality of visualization in AR. In the present paper, we evaluate the usability of a set of simplified 3D CAD models used to convey manual assembly information to novice operators. The experiment included 14 participants, six assembly operations, and two sets of 3D CAD models (originals and simplified), and was conducted in a laboratory setting. To simulate a real-world assembly scenario as closely as possible, the components and the corresponding original 3D CAD models were obtained from a real industrial setup. Based on subjective evaluations, the paper confirms that simplified 3D CAD models can replace the original models within AR applications without degrading the user experience.
Citations: 0
Deep Learning Classification in web3D Model Geometries: Using X3D Models for Machine Learning Classification in Real-Time Web Applications
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564564
Chrysoula Tzermia, Nick-Periklis Chourdas, A. Malamos
Abstract: In this paper we study the requirements that web3D models, and in particular X3D-formatted models, must meet to work efficiently with deep learning algorithms. We focus on this particular type of 3D model because we consider web3D part of the future of computer graphics. The introduction of Metaverse technology indeed confirms that lightweight, interoperable 3D models will be an essential part of many novel services in the near future. Furthermore, the X3D language expresses 3D information in a semantically friendly way that is very useful for future applications. In our research we conclude that lightweight X3D models require some vertex enhancement to cooperate with deep learning algorithms; however, we suggest algorithms that can be applied to keep the whole process real-time, which is very important for web applications.
Citations: 0
InstantXR: Instant XR Environment on the Web Using Hybrid Rendering of Cloud-based NeRF with 3D Assets
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564565
Moonsik Park, Byounghyun Yoo, Jee Young Moon, Ji-Hyun Seo
Abstract: For an XR environment to be used in a real-life task, it is crucial that all the content is created and delivered when we want it, where we want it, and, most importantly, on time. To deliver an XR environment faster and correctly, the time spent on modeling should be considerably reduced or eliminated. In this paper, we propose a hybrid method that fuses conventional rendering of 3D assets with Neural Radiance Fields (NeRF), which uses photographs to create and display an instantly generated XR environment in real time, without a modeling process. While NeRF can generate a relatively realistic space without human supervision, it has disadvantages owing to its high computational complexity. We propose a cloud-based distributed acceleration architecture to reduce computational latency. Furthermore, we implemented an XR streaming structure that can process input from an XR device in real time. Consequently, our proposed hybrid method for real-time XR generation using NeRF and 3D graphics works for lightweight mobile XR clients, such as untethered HMDs. The proposed technology makes it possible to quickly virtualize one location and deliver it to another remote location, making virtual sightseeing and remote collaboration more accessible to the public. The implementation of our proposed architecture, along with a demo video, is available at https://moonsikpark.github.io/instantxr/
Citations: 3
Levels of Representation and Data Infrastructures in Entomo-3D: An Applied Research Approach to Addressing Metadata Curation Issues to Support Preservation and Access of 3D Data
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564573
Wen Nie Ng, Alex Kinnaman, N. Hall
Abstract: This paper employs an action-based research approach to address the question of how to create a sustainable workflow that supports long-term access, interoperability, and reuse of 3D data. This is applied research stemming from the Entomo-3D collaboration between the Virginia Tech University Libraries and the Virginia Tech Department of Entomology to digitize a university insect pollinator collection. The paper describes the infrastructure that supports data management and transformation, as well as new challenges that have emerged from this effort.
Citations: 0
Defining the Metaverse through the Lens of Academic Scholarship, News Articles, and Social Media
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564571
Nathan David Green, Karen Works
Abstract: The emergence of the Metaverse has received varied attention from academic scholars, news media, and social media. While the term 'Metaverse' has been around since Neal Stephenson's 1992 novel, Snow Crash, its definition changes depending on who is describing it and when. In this study we analyze various works about the Metaverse, from ACM publications on the topic, to news media reports, to discussions on social media. Using topic modeling techniques and natural language analysis, we show how each community speaks about and defines the Metaverse.
Citations: 1
Designing for Social Interactions in a Virtual Art Gallery
Pub Date: 2022-11-02, DOI: 10.1145/3564533.3564562
Nicholas F. Polys, Samridhi Roshan, Emily Newton, Muskaan Narula, Bao T. Thai
Abstract: The dawn of a new digital world has emerged with new ways to communicate and collaborate with other people across the globe. Metaverses and Mirror Worlds have broadened our perspectives on the ways we can use 3D virtual environments. A Mirror World is a 3D virtual space that depicts a real-life place or environment that people may want to visit or manipulate to create something new. A good example is an art gallery, which gives people an outlet to express themselves through various art forms and to socialize, providing the human interaction that is needed when physical presence is difficult. This project strives to improve social interactions and make spatial control easier and more fluid in a virtual art gallery, while also incorporating the existing metaphor of permissions and user privileges used in synchronous collaborative environments. We created ways for people to be invited into group chats based on proximity, allowing users to consent to whom they want to talk and with whom they will share control of the space. We also implemented a 3D map view of the space that highlights pieces of artwork for people to teleport to and view at ease. To demonstrate this shared viewing and navigation experience, we also incorporated audio and spatial interaction features within the art gallery prototype, built from X3D and glTF models, images, audio, and an HTML user interface.
Citations: 2