{"title":"In Situ Segmentation of Turbulent Flow with Topology Data Analysis","authors":"F. Nauleau, B. Fovet, F. Vivodtzev","doi":"10.1145/3532719.3543257","DOIUrl":"https://doi.org/10.1145/3532719.3543257","url":null,"abstract":"","PeriodicalId":289790,"journal":{"name":"ACM SIGGRAPH 2022 Posters","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124823452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Immersive-Labeler: Immersive Annotation of Large-Scale 3D Point Clouds in Virtual Reality","authors":"Achref Doula, T. Güdelhöfer, Andrii Mativiienko, M. Mühlhäuser, Alejandro Sanchez Guinea","doi":"10.1145/3532719.3543249","DOIUrl":"https://doi.org/10.1145/3532719.3543249","url":null,"abstract":"We present Immersive-Labeler, an environment for the annotation of large-scale 3D point cloud scenes of urban environments. Our concept is based on the full immersion of the user in a VR-based environment that represents the 3D point cloud scene while offering adapted visual aids and intuitive interaction and navigation modalities. Through a user-centric design, we aim to improve the annotation experience and thus reduce its costs. For the preliminary evaluation of our environment, we conduct a user study (N=20) to quantify the effect of higher levels of immersion in combination with the visual aids we implemented on the annotation process. Our findings reveal that higher levels of immersion combined with object-based visual aids lead to a faster and more engaging annotation process.","PeriodicalId":289790,"journal":{"name":"ACM SIGGRAPH 2022 Posters","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125436261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Denoising and Guided Upsampling of Monte Carlo Path Traced Low Resolution Renderings","authors":"K. Alpay, A. Akyüz","doi":"10.1145/3532719.3543250","DOIUrl":"https://doi.org/10.1145/3532719.3543250","url":null,"abstract":"Monte Carlo path tracing generates renderings by estimating the rendering equation using the Monte Carlo method. Studies focus on rendering a noisy image at the original resolution with a low sample per pixel count to decrease the rendering time. Image-space denoising is then applied to produce a visually appealing output. However, denoising process cannot handle the high variance of the noisy image accurately if the sample count is reduced harshly to finish the rendering in a shorter time. We propose a framework that renders the image at a reduced resolution to cast more samples than the harshly lowered sample count in the same time budget. The image is then robustly denoised, and the denoised result is upsampled using original resolution G-buffer of the scene as guidance.","PeriodicalId":289790,"journal":{"name":"ACM SIGGRAPH 2022 Posters","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115075547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Determining the Orientation of Low Resolution Images of a De-Bruijn Tracking Pattern with a CNN","authors":"Andreas Schmid, Raphael Wimmer, S. Lippl","doi":"10.1145/3532719.3543259","DOIUrl":"https://doi.org/10.1145/3532719.3543259","url":null,"abstract":"Inside-out optical 2D tracking of tangible objects on a surface oftentimes uses a high-resolution pattern printed on the surface. While De-Bruijn-torus patterns offer maximum information density, their orientation must be known to decode them. Determining the orientation is challenging for patterns with very fine details; traditional algorithms, such as Hough Lines, do not work reliably. We show that a convolutional neural network can reliably determine the orientation of quasi-random bitmaps with 6 × 6 pixels per block within 36 × 36 pixel images taken by a mouse sensor. Mean error rate is below 2°. Furthermore, our model outperformed Hough Lines in a test with arbitrarily rotated low-resolution rectangles. This implies that CNN-based rotation-detection might also be applicable for more general use cases.","PeriodicalId":289790,"journal":{"name":"ACM SIGGRAPH 2022 Posters","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121585814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A GPU-Accelerated Hydrodynamics Solver For Atmosphere-Fire Interactions","authors":"Jhamieka Greenwood, B. Quaife, K. Speer","doi":"10.1145/3532719.3543263","DOIUrl":"https://doi.org/10.1145/3532719.3543263","url":null,"abstract":"A fundamental process to understand fire spread is the atmospheric flow. Building computational tools to simulate this complex flow has several challenges including boundary layer effects, resolving vegetation and the forest canopies, conserving fluid mass, and incorporating fire-induced flows. We develop a two-dimensional hydrodynamic solver that models fire-induced flow as a convective sink that converts the two-dimensional horizontal flow into a vertical flow through the buoyant plume. The resulting equations are the two-dimensional Navier-Stokes equations, but with point source delta functions appearing in the conservation of mass equation. We develop a projection method to solve these equations and implement them on a GPU architecture. The ultimate goalis to simulate wildfire spread faster than real-time, and with the ability for users to introduce real-time updates in an augmented reality sandbox.","PeriodicalId":289790,"journal":{"name":"ACM SIGGRAPH 2022 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128957303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GravityPack: Exploring a Wearable Gravity Display for Immersive Interaction Using Liquid-based System","authors":"Yu-Yen Chen, Yiran Lu, Ping-Hsuan Han","doi":"10.1145/3532719.3543218","DOIUrl":"https://doi.org/10.1145/3532719.3543218","url":null,"abstract":"Previous works have done several kinds of haptic techniques for simulating the touching experience of the virtual object. However, the feedback on the object’s weight has less been explored. This paper presents GravityPack, a wearable gravity display to simulate grabbing, holding, and releasing the virtual object in the virtual world using the liquid-based system consisting of pumps, pipes, valves, a water tank, and water packs. This system can provide a wide weight range from 110g to 1.8 kg in 40 seconds. Additionally, we design and investigate the visual feedback of weight transition for the delay time of liquid transfer to understand the feasibility of visualization by a user study. With the revealing of design consideration and implementation, the paper also shows the potential use of the liquid-based system and its possibility of the visualization technique to simulate the weight sensations.","PeriodicalId":289790,"journal":{"name":"ACM SIGGRAPH 2022 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133654291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ProjecString: Turning an Everyday String Curtain into an Interactive Projection Display","authors":"Wooje Chang, Yeeun Shin, Yeon Soo Kim, Woohun Lee","doi":"10.1145/3532719.3543203","DOIUrl":"https://doi.org/10.1145/3532719.3543203","url":null,"abstract":"We present ProjecString, a touch-sensitive string curtain projection display that encourages novel interactions via touching, grasping, and seeing and walking through the display. We embed capacitive-sensing conductive chains into an everyday string curtain, turning it into both a space divider and an interactive display. This novel take on transforming an everyday object into an interactive projection surface with a unique translucent property creates novel interactions that are both immersive and isolating.","PeriodicalId":289790,"journal":{"name":"ACM SIGGRAPH 2022 Posters","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117108899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of 3D projection mapping from a moving vehicle to observe from inside and outside of the vehicle","authors":"Sora Ahn, S. Mizuno","doi":"10.1145/3532719.3543247","DOIUrl":"https://doi.org/10.1145/3532719.3543247","url":null,"abstract":"","PeriodicalId":289790,"journal":{"name":"ACM SIGGRAPH 2022 Posters","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128888132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}