{"title":"Course on virtual nature as a digital twin: botanically correct 3D AR and VR optimized low-polygon and photogrammetry high-polygon plant models","authors":"Maria C. R. Harrington, Chris Jones, Crissy Peters","doi":"10.1145/3532720.3535663","DOIUrl":"https://doi.org/10.1145/3532720.3535663","url":null,"abstract":"The student of this course should already know how to use 3D modeling software to create FBX files. This course expands on a short overview presented in the Educators' Forum and differs in several ways. First presented is an overview of the context and justification for why botanically accurate plants and landscapes are important for educational applications, such as use in museums, arboretums, and field-trip experiences at botanical gardens. Connected to that goal is the importance of accuracy in visualizing not only the plant, but the entire virtual model of the landscape, using plant inventory data and plant population density geographical information system (GIS) data. This touches on issues important for digital twins as models and simulations of reality. Educational applications differ from entertainment applications in the dimensions of information fidelity, or the trustworthiness of the presentation, and graphical fidelity, or the photorealistic capacity of the rendering systems. These are not always the same. High graphical fidelity is a byproduct of high information fidelity; the reverse is not always true. High graphical fidelity enhances information fidelity, if used for that purpose. Two immersive informal learning use cases are presented, one for augmented reality (AR) and the other for virtual reality (VR). Both models used the same design and development process, integrating domain expertise from the botanist and the ecologist with the art and software team to enhance accuracy. 
Co-design, a highly iterative review process, removes errors in educational content and representation, improves usability, and may be generalized to any domain when learning is a goal of a digital twin. In this work it is referred to as the Expert-Learner-User-Experience (ELUX) design process. Game engines, as general-purpose visualization tools, make multimodal interaction possible, enhancing user experiences and making semantic material accessible to the learner. The technical constraints on the application design demanded two production pipelines. The AR and VR pipeline required low-polygon models for performance, and the newly released Unreal Engine 5 and Reality Capture created an opportunity to increase the graphical fidelity and the information fidelity of the plants and models. Virtual nature construction methods are covered as two processes: first with low-polygon 3D plant models ideal for AR and VR, and second with high-polygon 3D plant models using Unreal Engine 5 and Reality Capture. When highly accurate 3D plant models are combined with state-of-the-art photorealistic rendering and GIS geospatial datasets, and visualized on immersive devices, digital twins of the natural world become possible. 
Once these models are connected to mathematical models of the natural world, with dynamics driven by real-time data feeds and forecasts, both back in time and forward ","PeriodicalId":233541,"journal":{"name":"ACM SIGGRAPH 2022 Courses","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115258063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
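The landscape-construction step described above, placing plant instances according to plant population density GIS data, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the course's actual pipeline: the density grid, the cell size, and the use of Poisson sampling with per-cell jitter are all hypothetical choices for the example.

```python
import numpy as np

def scatter_plants(density, cell_size, rng=None):
    """Sample plant positions from a per-cell population-density grid.

    density: 2D array of expected plant counts per grid cell (e.g. derived
    from GIS plant-inventory data). cell_size: world-space size of one cell.
    Returns an (N, 2) array of world-space (x, y) positions.
    """
    rng = rng or np.random.default_rng(0)
    counts = rng.poisson(density)              # stochastic count per cell
    rows, cols = np.nonzero(counts)
    positions = []
    for r, c in zip(rows, cols):
        n = counts[r, c]
        # jitter each plant instance uniformly inside its cell
        xy = (np.array([c, r]) + rng.random((n, 2))) * cell_size
        positions.append(xy)
    return np.vstack(positions) if positions else np.empty((0, 2))

# hypothetical 3x3 density map: most plants in the centre cell
density = np.array([[0.0, 1.0, 0.0],
                    [1.0, 8.0, 1.0],
                    [0.0, 1.0, 0.0]])
points = scatter_plants(density, cell_size=5.0)
```

In a game-engine pipeline, the resulting positions would drive instanced placement of the low- or high-polygon plant models.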
{"title":"Advances in real-time rendering in games: part II","authors":"Natalya Tatarchuk, A. Schneider","doi":"10.1145/3532720.3546903","DOIUrl":"https://doi.org/10.1145/3532720.3546903","url":null,"abstract":"","PeriodicalId":233541,"journal":{"name":"ACM SIGGRAPH 2022 Courses","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121210070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Custom landmarkers: building location based AR with lens studio","authors":"Hammad Bashir, Chris Reilly, Callie Holderman","doi":"10.1145/3532720.3535644","DOIUrl":"https://doi.org/10.1145/3532720.3535644","url":null,"abstract":"Today we're excited to walk you through creating location-based augmented reality (AR) content with Snap's Lens Studio, Snap's augmented reality content authoring tool. We'll discuss what location-based AR is and the design challenges it presents, and demonstrate how you can use Lens Studio to address and solve many of these challenges.","PeriodicalId":233541,"journal":{"name":"ACM SIGGRAPH 2022 Courses","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132961766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contact and friction simulation for computer graphics","authors":"S. Andrews, Kenny Erleben, Z. Ferguson","doi":"10.1145/3532720.3535640","DOIUrl":"https://doi.org/10.1145/3532720.3535640","url":null,"abstract":"Efficient simulation of contact is of interest for numerous physics-based animation applications. For instance, virtual reality training, video games, rapid digital prototyping, and robotics simulation are all examples of applications that involve contact modeling and simulation. However, despite its extensive use in modern computer graphics, contact simulation remains one of the most challenging problems in physics-based animation. This course covers fundamental topics on the nature of contact modeling and simulation for computer graphics. Specifically, we provide mathematical details about formulating contact as a complementarity problem in rigid body and soft body animations. We briefly cover several approaches for contact generation using discrete collision detection. Then, we present a range of numerical techniques for solving the associated linear and nonlinear complementarity problems (LCPs and NCPs). The advantages and disadvantages of each technique are further discussed in a practical manner, along with best practices for implementation. Finally, we conclude the course with several advanced topics such as methods for soft body contact problems, barrier functions, and anisotropic friction modeling. 
Programming examples are provided in our appendix as well as on the course website to accompany the course notes.","PeriodicalId":233541,"journal":{"name":"ACM SIGGRAPH 2022 Courses","volume":"55 16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124909602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
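The abstract above mentions numerical techniques for solving the LCPs that arise from contact. One standard iterative method in this family is projected Gauss-Seidel; the sketch below is a minimal illustration of that technique on a made-up two-contact example, not code from the course's appendix or website.

```python
import numpy as np

def projected_gauss_seidel(A, b, iters=200):
    """Solve the LCP  w = A x + b,  x >= 0,  w >= 0,  x^T w = 0
    with projected Gauss-Seidel, a common iterative contact solver.
    A is assumed symmetric positive definite (e.g. a Delassus operator)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual of row i, excluding the diagonal term A[i,i] * x[i]
            r = b[i] + A[i] @ x - A[i, i] * x[i]
            # clamp to zero: contact impulses cannot pull bodies together
            x[i] = max(0.0, -r / A[i, i])
    return x

# tiny made-up 2-contact system
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([-1.0, -2.0])
x = projected_gauss_seidel(A, b)
w = A @ x + b   # complementarity residual; should be ~0 where x > 0
```

The per-row clamp is what distinguishes this from plain Gauss-Seidel: it enforces the non-negativity (non-penetration) constraint at every sweep.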
{"title":"Advances in real-time rendering in games: part III","authors":"Natalya Tatarchuk, F. Ciardi, Lasse Jon Fuglsang Pedersen, John Parsaie, Feng Xie, Hugh Malan","doi":"10.1145/3532720.3546905","DOIUrl":"https://doi.org/10.1145/3532720.3546905","url":null,"abstract":"","PeriodicalId":233541,"journal":{"name":"ACM SIGGRAPH 2022 Courses","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121562259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advances in real-time rendering in games: part I","authors":"Natalya Tatarchuk, J. Dupuy, T. Deliot, Daniel Wright, Krzysztof Narkowicz, Patrick Kelly, Aleksander Netzel, Tiago Costa","doi":"10.1145/3532720.3546895","DOIUrl":"https://doi.org/10.1145/3532720.3546895","url":null,"abstract":"","PeriodicalId":233541,"journal":{"name":"ACM SIGGRAPH 2022 Courses","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124305725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Building the open metaverse: part II","authors":"Patrick Cozzi, Marc Petit, Neil Trevett, N. Alameh, Morgan McGuire, Guido Quaroni","doi":"10.1145/3532720.3535668","DOIUrl":"https://doi.org/10.1145/3532720.3535668","url":null,"abstract":"","PeriodicalId":233541,"journal":{"name":"ACM SIGGRAPH 2022 Courses","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132689603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing vision and visuals: lectures on cameras, displays and perception","authors":"Koray Kavaklı, David R. Walton, N. Antipa, Rafał K. Mantiuk, Douglas Lanman, K. Akşit","doi":"10.1145/3532720.3535650","DOIUrl":"https://doi.org/10.1145/3532720.3535650","url":null,"abstract":"The evolution of the internet is underway, where immersive virtual 3D environments (commonly known as metaverse or telelife) will replace flat 2D interfaces. Crucial ingredients in this transformation are next-generation displays and cameras representing genuinely 3D visuals while meeting the human visual system's perceptual requirements. This course will provide a fast-paced introduction to optimization methods for next-generation interfaces geared towards immersive virtual 3D environments. Firstly, we will introduce lensless cameras for high-dimensional compressive sensing (e.g., single-exposure capture of a video or one-shot 3D); our audience will learn to process images from a lensless camera. Secondly, we will introduce holographic displays as a potential candidate for next-generation displays; by the end of this course, you will learn to create your own 3D images that can be viewed using a standard holographic display. Lastly, we will introduce perceptual guidance that could be an integral part of the optimization routines of displays and cameras; our audience will gather experience in integrating perception into display and camera optimizations. This course targets a wide range of audiences, from domain experts to newcomers. To support this, examples in this course are based on our in-house toolkit so they can be replicated in future use. 
The course material will provide example code and a broad survey with crucial information on cameras, displays and perception.","PeriodicalId":233541,"journal":{"name":"ACM SIGGRAPH 2022 Courses","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128825638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
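The hologram-computation exercise described in the abstract above can be illustrated with a classic phase-retrieval method, Gerchberg-Saxton iteration between the hologram plane and the far field. The course itself relies on the authors' in-house toolkit; this plain-NumPy sketch, with a hypothetical square target image, only conveys the idea and is not the course's implementation.

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=50):
    """Find a phase-only hologram whose far-field (FFT) amplitude
    approximates `target_amp`, via Gerchberg-Saxton iteration."""
    rng = np.random.default_rng(1)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))          # propagate to far field
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                       # propagate back
        phase = np.angle(near)                         # phase-only constraint
    return phase

# hypothetical target: a bright square on a dark background
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
hologram_phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * hologram_phase)))
```

Displaying `hologram_phase` on a phase-only spatial light modulator would, in the idealized Fourier-optics model assumed here, reconstruct the bright square in the far field.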