Latest Articles in IEEE Transactions on Visualization and Computer Graphics

Adaptive Sampling for Sound Propagation
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-02-14). DOI: 10.1109/TVCG.2019.2898765
Chakravarty R Alla Chaitanya, John M Snyder, Keith Godin, Derek Nowrouzezahrai, Nikunj Raghuvanshi
{"title":"Adaptive Sampling for Sound Propagation.","authors":"Chakravarty R Alla Chaitanya,&nbsp;John M Snyder,&nbsp;Keith Godin,&nbsp;Derek Nowrouzezahrai,&nbsp;Nikunj Raghuvanshi","doi":"10.1109/TVCG.2019.2898765","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2898765","url":null,"abstract":"<p><p>Precomputed sound propagation samples acoustics at discrete scene probe positions to support dynamic listener locations. An offline 3D numerical simulation is performed at each probe and the resulting field is encoded for runtime rendering with dynamic sources. Prior work place probes on a uniform grid, requiring high density to resolve narrow spaces. Our adaptive sampling approach varies probe density based on a novel \"local diameter\" measure of the space surrounding a given point, evaluated by stochastically tracing paths in the scene. We apply this measure to layout probes so as to smoothly adapt resolution and eliminate undersampling in corners, narrow corridors and stairways, while coarsening appropriately in more open areas. Coupled with a new runtime interpolator based on radial weights over geodesic paths, we achieve smooth acoustic effects that respect scene boundaries as both the source or listener move, unlike existing visibility-based solutions. We consistently demonstrate quality improvement over prior work at fixed cost.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":"1846-1854"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898765","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40538301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
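The key quantity in this paper is the stochastically traced "local diameter". Below is a minimal sketch, under assumed geometry queries and tuning constants, of how such a measure and the resulting probe spacing might be computed; `cast_ray`/`box_ray`, `k`, `min_s`, and `max_s` are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np

def random_directions(n, rng):
    """Uniformly sample n unit vectors on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def local_diameter(point, cast_ray, n_dirs=64, rng=None):
    """Estimate a 'local diameter' around a point by stochastic ray casting.

    cast_ray(origin, direction) -> distance to the nearest surface (a
    hypothetical scene query). Each direction yields a free-space chord
    length; a low percentile keeps narrow passages from being washed out
    by a few long chords through doorways.
    """
    rng = rng or np.random.default_rng(0)
    chords = [cast_ray(point, d) + cast_ray(point, -d)
              for d in random_directions(n_dirs, rng)]
    return float(np.percentile(chords, 25))

def probe_spacing(diameter, k=0.5, min_s=0.5, max_s=8.0):
    """Adapt probe spacing to local diameter: dense in narrow spaces,
    coarse in the open (k, min_s, max_s are assumed tuning knobs)."""
    return float(np.clip(k * diameter, min_s, max_s))

# Demo on a 4 m x 2 m x 3 m empty room centred at the origin.
half = np.array([2.0, 1.0, 1.5])
def box_ray(o, d):
    d = np.where(np.abs(d) < 1e-12, 1e-12, d)
    t = np.where(d > 0, (half - o) / d, (-half - o) / d)
    return float(t.min())

print(probe_spacing(local_diameter(np.zeros(3), box_ray)))
```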
You or Me? Personality Traits Predict Sacrificial Decisions in an Accident Situation
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-02-25). DOI: 10.1109/TVCG.2019.2899227
Ju Uijong, June Kang, Christian Wallraven
{"title":"You or Me? Personality Traits Predict Sacrificial Decisions in an Accident Situation.","authors":"Ju Uijong,&nbsp;June Kang,&nbsp;Christian Wallraven","doi":"10.1109/TVCG.2019.2899227","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2899227","url":null,"abstract":"<p><p>Emergency situations during car driving sometimes force the driver to make a sudden decision. Predicting these decisions will have important applications in updating risk analyses in insurance applications, but also can give insights for drafting autonomous vehicle guidelines. Studying such behavior in experimental settings, however, is limited by ethical issues as it would endanger peoples' lives. Here, we employed the potential of virtual reality (VR) to investigate decision-making in an extreme situation in which participants would have to sacrifice others in order to save themselves. In a VR driving simulation, participants first trained to complete a difficult course with multiple crossroads in which the wrong turn would lead the car to fall down a cliff. In the testing phase, obstacles suddenly appeared on the \"safe\" turn of a crossroad: for the control group, obstacles consisted of trees, whereas for the experimental group, they were pedestrians. In both groups, drivers had to decide between falling down the cliff or colliding with the obstacles. Results showed that differences in personality traits were able to predict this decision: in the experimental group, drivers who collided with the pedestrians had significantly higher psychopathy and impulsivity traits, whereas impulsivity alone was to some degree predictive in the control group. Other factors like heart rate differences, gender, video game expertise, and driving experience were not predictive of the emergency decision in either group. Our results show that self-interest related personality traits affect decision-making when choosing between preservation of self or others in extreme situations and showcase the potential of virtual reality in studying and modeling human decision-making.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"25 5","pages":"1898-1907"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2899227","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36997351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
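The prediction step described here is standard trait-to-decision classification. As a hedged illustration only (synthetic placeholder data; an assumed feature set of psychopathy and impulsivity scores; not the authors' analysis pipeline), a logistic regression like the following could relate trait scores to the binary collide-or-fall decision:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60  # synthetic participants

# Placeholder questionnaire scores; the paper reports psychopathy and
# impulsivity as the predictive traits in the pedestrian group.
psychopathy = rng.normal(size=n)
impulsivity = rng.normal(size=n)
# Synthetic labels: 1 = collided with obstacles, 0 = drove off the cliff.
y = (1.2 * psychopathy + 0.8 * impulsivity + rng.normal(size=n) > 0).astype(int)

X = np.column_stack([psychopathy, impulsivity])
acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```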
Functional Workspace Optimization via Learning Personal Preferences from Virtual Experiences
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-02-14). DOI: 10.1109/TVCG.2019.2898721
Wei Liang, Jingjing Liu, Yining Lang, Bing Ning, Lap-Fai Yu
{"title":"Functional Workspace Optimization via Learning Personal Preferences from Virtual Experiences.","authors":"Wei Liang,&nbsp;Jingjing Liu,&nbsp;Yining Lang,&nbsp;Bing Ning,&nbsp;Lap-Fai Yu","doi":"10.1109/TVCG.2019.2898721","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2898721","url":null,"abstract":"<p><p>The functionality of a workspace is one of the most important considerations in both virtual world design and interior design. To offer appropriate functionality to the user, designers usually take some general rules into account, e.g., general workflow and average stature of users, which are summarized from the population statistics. Yet, such general rules cannot reflect the personal preferences of a single individual, which vary from person to person. In this paper, we intend to optimize a functional workspace according to the personal preferences of the specific individual who will use it. We come up with an approach to learn the individual's personal preferences from his activities while using a virtual version of the workspace via virtual reality devices. Then, we construct a cost function, which incorporates personal preferences, spatial constraints, pose assessments, and visual field. At last, the cost function is optimized to achieve an optimal layout. To evaluate the approach, we experimented with different settings. The results of the user study show that the workspaces updated in this way better fit the users.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":"1836-1845"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898721","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40447506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
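The abstract describes optimizing a weighted cost over layouts. A minimal sketch follows, with made-up cost terms and weights (a preference-distance term, an overlap penalty, and a reach-frequency term; the paper's actual cost also includes pose assessment and the visual field), optimized by simple simulated annealing:

```python
import numpy as np

def cost(layout, preferred, radii, chair, use_freq, w=(1.0, 5.0, 0.5)):
    """Illustrative layout cost: pull items toward preferred spots,
    penalize overlap, keep frequently used items near the chair."""
    pref = np.sum((layout - preferred) ** 2)
    d = np.linalg.norm(layout[:, None] - layout[None, :], axis=-1)
    min_d = radii[:, None] + radii[None, :]
    overlap = np.sum(np.triu(np.maximum(min_d - d, 0.0), k=1) ** 2)
    reach = np.sum(use_freq * np.linalg.norm(layout - chair, axis=1))
    return w[0] * pref + w[1] * overlap + w[2] * reach

def anneal(layout, *args, steps=5000, t0=1.0, rng=None):
    """Simulated annealing over 2D item positions."""
    rng = rng or np.random.default_rng(0)
    best, best_c = layout.copy(), cost(layout, *args)
    cur, cur_c = best.copy(), best_c
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-3
        cand = cur + rng.normal(0, 0.1, cur.shape)
        c = cost(cand, *args)
        if c < cur_c or rng.random() < np.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand.copy(), c
    return best

# Demo: three items, assumed preferences, radii, and usage frequencies.
items = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
best = anneal(items, items + 0.5, np.full(3, 0.4),
              np.zeros(2), np.array([3.0, 1.0, 1.0]))
print(best.round(2))
```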
Modulating Fine Roughness Perception of Vibrotactile Textured Surface using Pseudo-haptic Effect
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-02-14). DOI: 10.1109/TVCG.2019.2898820
Yusuke Ujitoko, Yuki Ban, Koichi Hirota
{"title":"Modulating Fine Roughness Perception of Vibrotactile Textured Surface using Pseudo-haptic Effect.","authors":"Yusuke Ujitoko,&nbsp;Yuki Ban,&nbsp;Koichi Hirota","doi":"10.1109/TVCG.2019.2898820","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2898820","url":null,"abstract":"<p><p>Playing back vibrotactile signals through actuators is commonly used to simulate tactile feelings of virtual textured surfaces. However, there is often a small mismatch between the simulated tactile feelings and intended tactile feelings by tactile designers. Thus, a method of modulating the vibrotactile perception is required. We focus on fine roughness perception and we propose a method using a pseudo-haptic effect to modulate fine roughness perception of vibrotactile texture. Specifically, we visually modify the pointer's position on the screen slightly, which indicates the touch position on textured surfaces. We hypothesized that if users receive vibrational feedback watching the pointer visually oscillating back/forth and left/right, users would believe the vibrotactile surfaces more uneven. We also hypothesized that as the size of visual oscillation is getting larger, the amount of modification of roughness perception of vibrotactile surfaces would be larger. We conducted user studies to test the hypotheses. Results of first user study suggested that users felt vibrotactile texture with our method rougher than they did without our method at a high probability. Results of second user study suggested that users felt different roughness for vibrational texture in response to the size of visual oscillation. These results confirmed our hypotheses and they suggested that our method was effective. Also, the same effect could potentially be applied to the visual movement of virtual hands or fingertips when users are interacting with virtual surfaces using their hands.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":"1981-1990"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898820","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40547670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
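The core manipulation is a small visual oscillation of the displayed pointer while vibrotactile feedback plays. A sketch of that displacement, with assumed amplitude and frequency values (the study varied the oscillation size to modulate perceived roughness):

```python
import numpy as np

def displayed_pointer(true_pos, t, amplitude_px=2.0, freq_hz=30.0, rng=None):
    """Pseudo-haptic pointer displacement: add a small back/forth and
    left/right visual oscillation to the true touch position.
    amplitude_px controls the effect size (values here are assumptions,
    not the study's parameters)."""
    rng = rng or np.random.default_rng()
    phase = 2 * np.pi * freq_hz * t
    # Jittered sinusoid so the oscillation reads as texture rather
    # than as a periodic artefact.
    offset = amplitude_px * np.array([
        np.sin(phase) + 0.3 * rng.normal(),
        np.cos(phase) + 0.3 * rng.normal(),
    ])
    return np.asarray(true_pos, float) + offset

print(displayed_pointer([100.0, 200.0], t=0.016).round(2))
```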
Manufacturing Application-Driven Foveated Near-Eye Displays
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-02-14). DOI: 10.1109/TVCG.2019.2898781
Kaan Aksit, Praneeth Chakravarthula, Kishore Rathinavel, Youngmo Jeong, Rachel Albert, Henry Fuchs, David Luebke
{"title":"Manufacturing Application-Driven Foveated Near-Eye Displays.","authors":"Kaan Aksit,&nbsp;Praneeth Chakravarthula,&nbsp;Kishore Rathinavel,&nbsp;Youngmo Jeong,&nbsp;Rachel Albert,&nbsp;Henry Fuchs,&nbsp;David Luebke","doi":"10.1109/TVCG.2019.2898781","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2898781","url":null,"abstract":"<p><p>Traditional optical manufacturing poses a great challenge to near-eye display designers due to large lead times in the order of multiple weeks, limiting the abilities of optical designers to iterate fast and explore beyond conventional designs. We present a complete near-eye display manufacturing pipeline with a day lead time using commodity hardware. Our novel manufacturing pipeline consists of several innovations including a rapid production technique to improve surface of a 3D printed component to optical quality suitable for near-eye display application, a computational design methodology using machine learning and ray tracing to create freeform static projection screen surfaces for near-eye displays that can represent arbitrary focal surfaces, and a custom projection lens design that distributes pixels non-uniformly for a foveated near-eye display hardware design candidate. We have demonstrated untethered augmented reality near-eye display prototypes to assess success of our technique, and show that a ski-goggles form factor, a large monocular field of view (30<sup>o</sup>×55<sup>o</sup>), and a resolution of 12 cycles per degree can be achieved.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"25 5","pages":"1928-1939"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898781","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37150908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 47
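To give a feel for the non-uniform pixel distribution mentioned above, here is a hedged sketch of a foveated resolution budget: a target in cycles per degree falls off with eccentricity following a common acuity model (the falloff constant `e2` and the foveal target `cpd0` are assumptions, not the paper's values; the paper realizes such a distribution optically via its custom projection lens):

```python
import numpy as np

def target_cpd(ecc_deg, cpd0=12.0, e2=2.3):
    """Target resolution (cycles/degree) vs. eccentricity; cpd0 at
    the fovea, halving roughly every e2 degrees of eccentricity."""
    return cpd0 * e2 / (e2 + np.abs(ecc_deg))

# Integrate pixels/degree (= 2x cycles/degree, Nyquist) across a 55°
# horizontal field to estimate the foveated pixel budget.
ecc = np.linspace(0.0, 27.5, 1000)        # half-field eccentricities
ppd = 2.0 * target_cpd(ecc)
cum_px = np.cumsum(ppd) * (ecc[1] - ecc[0])
print(f"pixels per half-field: {cum_px[-1]:.0f}")
```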
A Perception-driven Hybrid Decomposition for Multi-layer Accommodative Displays
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-02-18). DOI: 10.1109/TVCG.2019.2898821
Hyeonseung Yu, Mojtaba Bemana, Marek Wernikowski, Michal Chwesiuk, Okan Tarhan Tursun, Gurprit Singh, Karol Myszkowski, Radoslaw Mantiuk, Hans-Peter Seidel, Piotr Didyk
{"title":"A Perception-driven Hybrid Decomposition for Multi-layer Accommodative Displays.","authors":"Hyeonseung Yu,&nbsp;Mojtaba Bemana,&nbsp;Marek Wernikowski,&nbsp;Michal Chwesiuk,&nbsp;Okan Tarhan Tursun,&nbsp;Gurprit Singh,&nbsp;Karol Myszkowski,&nbsp;Radoslaw Mantiuk,&nbsp;Hans-Peter Seidel,&nbsp;Piotr Didyk","doi":"10.1109/TVCG.2019.2898821","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2898821","url":null,"abstract":"<p><p>Multi-focal plane and multi-layered light-field displays are promising solutions for addressing all visual cues observed in the real world. Unfortunately, these devices usually require expensive optimizations to compute a suitable decomposition of the input light field or focal stack to drive individual display layers. Although these methods provide near-correct image reconstruction, a significant computational cost prevents real-time applications. A simple alternative is a linear blending strategy which decomposes a single 2D image using depth information. This method provides real-time performance, but it generates inaccurate results at occlusion boundaries and on glossy surfaces. This paper proposes a perception-based hybrid decomposition technique which combines the advantages of the above strategies and achieves both real-time performance and high-fidelity results. The fundamental idea is to apply expensive optimizations only in regions where it is perceptually superior, e.g., depth discontinuities at the fovea, and fall back to less costly linear blending otherwise. We present a complete, perception-informed analysis and model that locally determine which of the two strategies should be applied. The prediction is later utilized by our new synthesis method which performs the image decomposition. The results are analyzed and validated in user experiments on a custom multi-plane display.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"25 5","pages":"1940-1950"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898821","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37150909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
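The linear blending baseline the paper falls back to is simple enough to sketch: each pixel's intensity is split between the two display planes that bracket its depth, weighted by proximity in dioptres. A minimal version, with assumed plane depths (not the paper's display configuration):

```python
import numpy as np

def linear_blend(image, depth_d, plane_depths_d):
    """Split each pixel between the two planes bracketing its depth.
    image: HxW intensities; depth_d: HxW depths in dioptres;
    plane_depths_d: sorted 1D array of layer depths in dioptres.
    Returns an array with one image per display layer."""
    planes = np.asarray(plane_depths_d, float)
    layers = np.zeros((len(planes),) + image.shape)
    d = np.clip(depth_d, planes[0], planes[-1])
    idx = np.clip(np.searchsorted(planes, d) - 1, 0, len(planes) - 2)
    lo, hi = planes[idx], planes[idx + 1]
    w_hi = (d - lo) / (hi - lo)         # weight on the farther-bracket plane
    rows, cols = np.indices(image.shape)
    layers[idx, rows, cols] += (1.0 - w_hi) * image
    layers[idx + 1, rows, cols] += w_hi * image
    return layers

# Demo: a 2x2 image over three assumed planes at 0, 1.5, 3 dioptres.
img = np.ones((2, 2))
depth = np.array([[0.2, 1.0], [2.0, 3.0]])   # dioptres
print(linear_blend(img, depth, [0.0, 1.5, 3.0]).round(2))
```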
MegaParallax: Casual 360° Panoramas with Motion Parallax
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-02-25). DOI: 10.1109/TVCG.2019.2898799
Tobias Bertel, Neill D F Campbell, Christian Richardt
{"title":"MegaParallax: Casual 360° Panoramas with Motion Parallax.","authors":"Tobias Bertel,&nbsp;Neill D F Campbell,&nbsp;Christian Richardt","doi":"10.1109/TVCG.2019.2898799","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2898799","url":null,"abstract":"<p><p>The ubiquity of smart mobile devices, such as phones and tablets, enables users to casually capture 360° panoramas with a single camera sweep to share and relive experiences. However, panoramas lack motion parallax as they do not provide different views for different viewpoints. The motion parallax induced by translational head motion is a crucial depth cue in daily life. Alternatives, such as omnidirectional stereo panoramas, provide different views for each eye (binocular disparity), but they also lack motion parallax as the left and right eye panoramas are stitched statically. Methods based on explicit scene geometry reconstruct textured 3D geometry, which provides motion parallax, but suffers from visible reconstruction artefacts. The core of our method is a novel multi-perspective panorama representation, which can be casually captured and rendered with motion parallax for each eye on the fly. This provides a more realistic perception of panoramic environments which is particularly useful for virtual reality applications. Our approach uses a single consumer video camera to acquire 200-400 views of a real 360° environment with a single sweep. By using novel-view synthesis with flow-based blending, we show how to turn these input views into an enriched 360° panoramic experience that can be explored in real time, without relying on potentially unreliable reconstruction of scene geometry. We compare our results with existing omnidirectional stereo and image-based rendering methods to demonstrate the benefit of our approach, which is the first to enable casual consumers to capture and view high-quality 360° panoramas with motion parallax.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"25 5","pages":"1828-1835"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898799","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36997350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
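A sketch of the view-selection-and-blend step in this kind of multi-perspective rendering: for each desired ray, the two captured views whose camera angles bracket the ray are blended with angular weights. This is an illustration under stated assumptions (a full, sorted circular capture sweep; `sample_view` is a hypothetical image lookup; the paper's flow-based correspondence between the two views is omitted):

```python
import numpy as np

def blend_weights(ray_angle, cam_angles):
    """cam_angles: sorted array in [0, 2*pi). Returns the indices of
    the two bracketing cameras and the weight on the second."""
    a = np.asarray(cam_angles)
    j = int(np.searchsorted(a, ray_angle % (2 * np.pi)) % len(a))
    i = (j - 1) % len(a)
    span = (a[j] - a[i]) % (2 * np.pi)
    w_j = ((ray_angle - a[i]) % (2 * np.pi)) / span if span > 0 else 0.5
    return i, j, w_j

def render_ray(ray_angle, cam_angles, sample_view):
    """sample_view(view_index, ray_angle) -> colour: a hypothetical
    lookup into the captured images (with flow-based correspondence
    in the real system)."""
    i, j, w = blend_weights(ray_angle, cam_angles)
    return (1 - w) * sample_view(i, ray_angle) + w * sample_view(j, ray_angle)

# Demo with 8 evenly spaced capture positions and a dummy lookup.
cams = np.linspace(0, 2 * np.pi, 8, endpoint=False)
print(render_ray(0.3, cams, lambda i, a: float(i)))
```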
Varifocal Occlusion for Optical See-Through Head-Mounted Displays using a Slide Occlusion Mask
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01. DOI: 10.1109/TVCG.2019.2899249
Takumi Hamasaki, Yuta Itoh
{"title":"Varifocal Occlusion for Optical See-Through Head-Mounted Displays using a Slide Occlusion Mask.","authors":"Takumi Hamasaki,&nbsp;Yuta Itoh","doi":"10.1109/TVCG.2019.2899249","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2899249","url":null,"abstract":"<p><p>We propose a varifocal occlusion technique for optical see-through head-mounted displays (OST-HMDs). Occlusion in OST-HMDs is a powerful visual cue that enables depth perception in augmented reality (AR). Without occlusion, virtual objects rendered by an OST-HMD appear semi-transparent and less realistic. A common occlusion technique is to use spatial light modulators (SLMs) to block incoming light rays at each pixel on the SLM selectively. However, most of the existing methods create an occlusion mask only at a single, fixed depth-typically at infinity. With recent advances in varifocal OST-HMDs, such traditional fixed-focus occlusion causes a mismatch in depth between the occlusion mask plane and the virtual object to be occluded, leading to an uncomfortable user experience with blurred occlusion masks. In this paper, we thus propose an OST-HMD system with varifocal occlusion capability: we physically slide a transmissive liquid crystal display (LCD) to optically shift the occlusion plane along the optical path so that the mask appears sharp and aligns to a virtual image at a given depth. Our solution has several benefits over existing varifocal occlusion methods: it is computationally less demanding and, more importantly, it is optically consistent, i.e., when a user loses focus on the corresponding virtual image, the mask again gets blurred consistently as the virtual image does. In the experiment, we build a proof-of-concept varifocal occlusion system implemented with a custom retinal projection display and demonstrate that the system can shift the occlusion plane to depths ranging from 25 cm to infinity.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"25 5","pages":"1961-1969"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2899249","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37295094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
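The arithmetic behind sliding the mask follows directly from the thin-lens equation: to make the mask appear at scene depth d_o, the LCD moves to the conjugate distance d_i of the occlusion optics. A sketch, assuming a single thin lens of focal length f (the prototype's actual optical path is more involved):

```python
def lcd_position(d_o_m, f_m=0.05):
    """Conjugate distance (m) for a mask perceived at d_o_m metres.
    Thin lens: 1/f = 1/d_o + 1/d_i  ->  d_i = 1 / (1/f - 1/d_o).
    f_m = 50 mm is an assumed focal length, not the paper's value."""
    if d_o_m == float("inf"):
        return f_m                 # mask at infinity sits at the focal plane
    return 1.0 / (1.0 / f_m - 1.0 / d_o_m)

# The paper's reported range: 25 cm to infinity.
for d in (0.25, 1.0, 10.0, float("inf")):
    print(f"occlusion depth {d:>6} m -> LCD at {lcd_position(d) * 1000:.1f} mm")
```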
Motion parallax for 360° RGBD video
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-03-06). DOI: 10.1109/TVCG.2019.2898757
Ana Serrano, Incheol Kim, Zhili Chen, Stephen DiVerdi, Diego Gutierrez, Aaron Hertzmann, Belen Masia
{"title":"Motion parallax for 360° RGBD video.","authors":"Ana Serrano,&nbsp;Incheol Kim,&nbsp;Zhili Chen,&nbsp;Stephen DiVerdi,&nbsp;Diego Gutierrez,&nbsp;Aaron Hertzmann,&nbsp;Belen Masia","doi":"10.1109/TVCG.2019.2898757","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2898757","url":null,"abstract":"<p><p>We present a method for adding parallax and real-time playback of 360° videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion, and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer, and then inpainting for the second one. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than without parallax, increasing immersion while reducing discomfort and nausea.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"25 5","pages":"1817-1827"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898757","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37032478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 69
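The basic reprojection that produces parallax from RGBD panoramas can be sketched in a few lines: each equirectangular sample with depth is lifted to a 3D point, expressed relative to the translated head, and projected back to spherical coordinates. This is the naive warp whose disocclusions the paper's three-layer representation and inpainting then handle; conventions below are assumptions:

```python
import numpy as np

def warp_equirect(theta, phi, depth, head_t):
    """theta: azimuth in [-pi, pi); phi: elevation in [-pi/2, pi/2];
    depth: metres; head_t: (3,) head translation in metres.
    Returns the warped (theta, phi, depth) for the moved viewpoint."""
    # Lift to a 3D point (y up, z forward -- an assumed convention).
    p = depth * np.array([np.cos(phi) * np.sin(theta),
                          np.sin(phi),
                          np.cos(phi) * np.cos(theta)])
    q = p - np.asarray(head_t, float)   # point relative to the moved head
    r = np.linalg.norm(q)
    return np.arctan2(q[0], q[2]), np.arcsin(q[1] / r), r

# A point 2 m ahead, viewed after a 10 cm sideways head translation.
print(warp_equirect(0.0, 0.0, 2.0, [0.1, 0.0, 0.0]))
```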
The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements
IF 5.2 | CAS Tier 1 | Computer Science
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2019-05-01 (Epub: 2019-02-21). DOI: 10.1109/TVCG.2019.2898748
Sergi Pujades, Betty Mohler, Anne Thaler, Joachim Tesch, Naureen Mahmood, Nikolas Hesse, Heinrich H Bulthoff, Michael J Black
{"title":"The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements.","authors":"Sergi Pujades,&nbsp;Betty Mohler,&nbsp;Anne Thaler,&nbsp;Joachim Tesch,&nbsp;Naureen Mahmood,&nbsp;Nikolas Hesse,&nbsp;Heinrich H Bulthoff,&nbsp;Michael J Black","doi":"10.1109/TVCG.2019.2898748","DOIUrl":"https://doi.org/10.1109/TVCG.2019.2898748","url":null,"abstract":"<p><p>Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and difficulty in making accurate tailoring measurements. We overcome these challenges by creating \"The Virtual Caliper\", which uses VR game controllers to make simple measurements. First, we establish what body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to rapidly create avatars in fewer than five minutes. Not only is our approach more rapid than existing methods, it exports a metrically accurate 3D avatar model that is rigged and skinned.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"25 5","pages":"1887-1897"},"PeriodicalIF":5.2,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898748","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36990225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
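The linear relationship the paper verifies between a handful of distance measurements and SMPL shape parameters suggests a simple fitting recipe: given training bodies with known shape coefficients (betas) and their measurements, ordinary least squares recovers a mapping that predicts betas for a new user. A sketch with synthetic stand-in data (real data would come from measured SMPL meshes; sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_meas, n_betas = 200, 6, 10

# Synthetic stand-ins: a true linear map plus measurement noise.
A_true = rng.normal(size=(n_meas, n_betas))
betas = rng.normal(size=(n_train, n_betas))
meas = betas @ A_true.T + 0.01 * rng.normal(size=(n_train, n_meas))

# Least squares with a bias term: betas ~ W @ [measurements, 1].
X = np.hstack([meas, np.ones((n_train, 1))])
W, *_ = np.linalg.lstsq(X, betas, rcond=None)

# Predict shape coefficients for a new user's measurements.
new_meas = rng.normal(size=(1, n_meas))
pred_betas = np.hstack([new_meas, [[1.0]]]) @ W
print(pred_betas.round(3))
```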