IEEE Transactions on Visualization and Computer Graphics: Latest Articles

CounterCrime - Using counterfactual explanations to explore crime reduction scenarios.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-07-11. DOI: 10.1109/TVCG.2025.3586202
Marcos M Raimundo, Germain Garcia-Zanabria, Luis Gustavo Nonato, Jorge Poco
Abstract: Analyzing the impact of socioeconomic and urban variables on crime is a complex data analysis problem. Exploring synthetic, correlation-based scenarios in which changes to a set of variables could alter a region's classification from unsafe to safe (known as a counterfactual explanation) can aid decision-makers in interpreting crime in that region and defining public policies to mitigate criminal activity. We propose CounterCrime, a visual analytics tool for crime analysis that uses counterfactual explanations to provide insights into this problem. The tool employs various interactive visual metaphors to explore the counterfactual explanations generated for each region. To facilitate exploration, we organize the analysis at three levels: the whole city, region groups, and individual regions. This work proposes a new perspective on crime analysis by creating "what-if" scenarios and allowing decision-makers to anticipate changes that would make a region safer. The tool guides the user in selecting the variables with the most significant effect across all city regions. Using a greedy strategy, the system recommends the variables most likely to influence crime in unsafe regions as the user explores. The tool also identifies the most appropriate counterfactual explanations at the regional level by grouping them by similarity and determining their feasibility through comparison with existing examples in other regions. Using crime data from São Paulo, Brazil, we validated our results with case studies. These case studies reveal interesting findings; for example, scenarios that influence crime in a particular unsafe region (or set of regions) might not influence crime in other unsafe regions.
Citations: 0
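The greedy variable recommendation described in the abstract can be illustrated with a minimal counterfactual search over a region classifier. This is a generic sketch, not the authors' implementation; `safe_prob` and the candidate `deltas` are hypothetical stand-ins for a trained crime model and its socioeconomic variables.

```python
import numpy as np

def greedy_counterfactual(x, safe_prob, deltas, max_changes=3, threshold=0.5):
    """Greedily change one variable at a time until the classifier
    flips the region from 'unsafe' to 'safe' (a counterfactual)."""
    x = np.asarray(x, dtype=float).copy()
    changes = []
    for _ in range(max_changes):
        if safe_prob(x) >= threshold:
            break  # region is now classified as safe
        # try each candidate single-variable change, keep the most helpful
        i, d, _ = max(
            ((i, d, safe_prob(x + d * np.eye(len(x))[i])) for i, d in deltas.items()),
            key=lambda t: t[2],
        )
        x[i] += d
        changes.append((i, d))
    return x, changes, safe_prob(x) >= threshold
```

With a toy logistic model over two variables, two unit changes suffice to flip the region, mirroring how the paper's "what-if" scenarios accumulate variable edits until the safety label changes.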
A Study of Data Augmentation for Learning-Driven Scientific Visualization.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-07-10. DOI: 10.1109/TVCG.2025.3587685
Jun Han, Hao Zheng, Jun Tao
Abstract: The success of deep learning relies heavily on large numbers of training samples. In scientific visualization, however, the high computational cost means that only a few samples are available during training, which limits the performance of deep learning. A common technique to address this data sparsity is data augmentation. In this paper, we present a comprehensive study of nine data augmentation techniques (noise injection, interpolation, scaling, flipping, rotation, variational autoencoders, generative adversarial networks, diffusion models, and implicit neural representations) to understand their effectiveness on two scientific visualization tasks: spatial super-resolution and ambient occlusion prediction. We compare the data quality, rendering fidelity, optimization time, and memory consumption of these techniques using several scientific datasets with various characteristics. We investigate the effects of the augmentation method, quantity, and diversity on these tasks with various deep learning models. Our study shows that increasing the quantity and single-domain diversity of augmented data can boost model performance, while the method and cross-domain diversity of the augmented data do not have the same impact. Based on our findings, we discuss opportunities and future directions for scientific data augmentation.
Citations: 0
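Several of the augmentation techniques studied (noise injection, flipping, rotation) are cheap to sketch for a 3D scalar field. A minimal illustration, assuming the volume is a NumPy array and that axis-aligned flips and 90-degree rotations preserve the task labels; this is not the paper's pipeline, just the basic transforms it evaluates:

```python
import numpy as np

def augment_volume(vol, rng, noise_std=0.01):
    """Apply three cheap augmentations to a 3D scalar field:
    noise injection, a random axis flip, and a random 90-degree rotation."""
    out = vol + rng.normal(0.0, noise_std, vol.shape)       # noise injection
    out = np.flip(out, axis=rng.integers(0, 3))             # random flip
    out = np.rot90(out, k=rng.integers(0, 4), axes=(0, 1))  # random rotation
    return out
```

For a cubic volume these transforms preserve the grid shape, so augmented samples can be fed to the same super-resolution or ambient occlusion network without resampling.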
Effects of Ankle Tendon Electrical Stimulation on Detection Threshold and Applicability of Redirected Walking.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-07-10. DOI: 10.1109/TVCG.2025.3588032
Takashi Ota, Keigo Matsumoto, Kazuma Aoyama, Tomohiro Amemiya, Takuji Narumi, Hideaki Kuzuoka
Abstract: Redirected walking (RDW) is a method for exploring virtual spaces larger than the physical space while preserving a natural walking sensation. To apply RDW in practice, it is necessary to expand the range of visual manipulation gains that can be applied without causing discomfort. Ankle tendon electrical stimulation (TES) can expand this range by inducing a body-tilt sensation and sway. In this study, we therefore propose a locomotion method that applies ankle TES to RDW. In Experiment 1, we evaluated the effect of TES on the detection threshold (DT), the maximal gain at which the visual manipulation remains unnoticed. The results indicated that the DT expanded when TES was applied to induce a body-tilt sensation in the same direction as the RDW's visual manipulation; specifically, the pooled mean of the DT expanded by more than 18%. In Experiment 2, we evaluated applicability, a supplementary index for assessing locomotion techniques. The results demonstrated that ankle TES mitigates the reduction in applicability, especially under a curvature gain of ±0.3 m⁻¹.
Citations: 0
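The detection threshold (DT) above is the gain at which the visual manipulation is first noticed at some criterion rate. A minimal way to estimate it from per-gain detection rates, using linear interpolation rather than the psychometric-function fits typically used in such studies (the gain levels and criterion here are illustrative, not the paper's):

```python
import numpy as np

def detection_threshold(gains, rates, criterion=0.75):
    """Estimate the DT as the gain where the detection rate first crosses
    the criterion, by linear interpolation between adjacent gain levels."""
    gains = np.asarray(gains, dtype=float)
    rates = np.asarray(rates, dtype=float)
    for i in range(1, len(gains)):
        if rates[i - 1] < criterion <= rates[i]:
            # fractional position of the crossing within this interval
            t = (criterion - rates[i - 1]) / (rates[i] - rates[i - 1])
            return gains[i - 1] + t * (gains[i] - gains[i - 1])
    return None  # criterion never crossed in the tested range
```

An expanded DT, as reported for TES in the same direction as the manipulation, would show up here as the crossing point shifting to a larger gain.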
Weighted Squared Volume Minimization (WSVM) for Generating Uniform Tetrahedral Meshes.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-07-10. DOI: 10.1109/TVCG.2025.3587642
Kaixin Yu, Yifu Wang, Peng Song, Xiangqiao Meng, Ying He, Jianjun Chen
Abstract: This paper presents a new algorithm, Weighted Squared Volume Minimization (WSVM), for generating high-quality tetrahedral meshes from closed triangle meshes. Drawing inspiration from minimal surfaces, which minimize squared surface area, WSVM employs a new energy function that integrates weighted squared volumes of the tetrahedral elements. When minimized with constant weights, this energy promotes uniform volumes among the tetrahedra; adjusting the weights to account for local geometry further achieves uniform dihedral angles within the mesh. The algorithm begins with an initial tetrahedral mesh generated via Delaunay tetrahedralization and proceeds by sequentially minimizing a volume-oriented and then a dihedral-angle-oriented energy. At each stage, it alternates between optimizing vertex positions and refining mesh connectivity in an iterative process. The algorithm is fully automatic and requires no parameter tuning. Evaluations on a variety of 3D models demonstrate that WSVM consistently produces tetrahedral meshes of higher quality, with fewer slivers and better uniformity than existing methods. Further details are available at the project webpage: https://kaixinyu-hub.github.io/WSVM.github.io/.
Citations: 0
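The weighted squared-volume energy described in the abstract can be written down directly: a sum of w_i·V_i² over the tetrahedra, where constant weights favor uniform volumes. A minimal evaluation sketch (not the paper's optimizer, which also alternates vertex updates with connectivity refinement):

```python
import numpy as np

def tet_volume(p0, p1, p2, p3):
    """Signed volume of a tetrahedron from its four vertex positions."""
    return np.dot(np.cross(p1 - p0, p2 - p0), p3 - p0) / 6.0

def wsv_energy(verts, tets, weights=None):
    """E = sum_i w_i * V_i^2; with constant weights, minimizing E
    penalizes large tetrahedra and pushes volumes toward uniformity."""
    vols = np.array([tet_volume(*verts[t]) for t in tets])
    if weights is None:
        weights = np.ones(len(tets))
    return float(np.sum(weights * vols ** 2)), vols
```

In the paper, the weights are then adapted to local geometry for the dihedral-angle-oriented stage; here a single unit-tetrahedron evaluation shows the energy's form.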
Augmented Vision Systems: Paradigms and Applications.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-07-09. DOI: 10.1109/TVCG.2025.3587527
Cristian Rendon-Cardona, Marie-Anne Burcklen, Richard Legras, Christian Sandor
Abstract: Augmented Reality (AR) has grown from specialised uses to applications for the general public. One of these developments is Augmented Vision (AV), which enhances vision beyond traditional methods such as glasses or contact lenses. This review compares and categorises AV systems according to the paradigms they implement to enhance the user's vision. Additionally, it examines whether researchers measure and analyse the human visual system (HVS) when evaluating their systems. Such an overview will help future researchers position their work on AV; by understanding the paradigms and approaches of AV systems, researchers will be well equipped to identify gaps, explore novel directions, and leverage existing advancements. We searched the Scopus, Web of Science, and PubMed databases for publications up to February 26, 2025, and explored the citations and references of the selected articles to avoid missing relevant work. We then conducted a two-step screening process involving LLM-assisted screening of abstracts followed by an in-depth assessment of each article. The review follows the PRISMA statement, reducing the risk of bias. We selected 113 of 469 articles, all of which improve users' visual performance. We defined three main categories: (1) adding light to the incoming light field, (2) modifying the incoming light field, and (3) intersecting approaches. We found three main application areas: (1) task-specific applications, (2) vision correction, and (3) visual perception enhancement, with task-specific applications being the most common. We identified a gap in the literature, as only four of the reviewed papers measured and analysed accommodation while the device was in use.
Citations: 0
Save It for the "hot" Day: An LLM-Empowered Visual Analytics System for Heat Risk Management.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-07-07. DOI: 10.1109/TVCG.2025.3586689
Haobo Li, Wong Kam-Kwai, Yan Luo, Juntong Chen, Chengzhong Liu, Yaxuan Zhang, Alexis Kai Hon Lau, Huamin Qu, Dongyu Liu
Abstract: The escalating frequency and intensity of heat-related climate events, particularly heatwaves, emphasize the pressing need for advanced heat risk management strategies. Current approaches, which rely primarily on numerical models, face challenges in spatial-temporal resolution and in capturing the dynamic interplay of the environmental, social, and behavioral factors affecting heat risks, making it difficult to translate risk assessments into effective mitigation actions. Recognizing these problems, we introduce a novel approach that leverages the capabilities of Large Language Models (LLMs) to extract rich, contextual insights from news reports. We propose an LLM-empowered visual analytics system, Havior, that integrates the precise, data-driven insights of numerical models with nuanced information from news reports. This hybrid approach enables a more comprehensive assessment of heat risks and better identification, assessment, and mitigation of heat-related threats. The system incorporates novel visualization designs, such as the "thermoglyph" and news glyph, enhancing intuitive understanding and analysis of heat risks. The LLM-based techniques also enable advanced information retrieval and semantic knowledge extraction guided by experts' analytic needs. We conducted an experiment on information extraction, a case study on the 2022 China heatwave, and an expert survey and interviews with six domain experts, demonstrating the usefulness of our system in providing in-depth, actionable insights for heat risk management.
Citations: 0
Hyper-spherical Optimal Transport for Semantic Alignment in Text-to-3D End-to-end Generation.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-07-07. DOI: 10.1109/TVCG.2025.3586646
Zezeng Li, Weimin Wang, Yuming Zhao, Wenhai Li, Na Lei, Xianfeng Gu
Abstract: Recent CLIP-guided 3D generation methods have achieved promising results but struggle to generate faithful 3D shapes that conform to the input text, owing to the gap between text and image embeddings. To this end, this paper proposes HOTS3D, which makes the first attempt to bridge this gap effectively by aligning text features to image features with spherical optimal transport (SOT). In high-dimensional settings, however, solving the SOT remains a challenge. To obtain the SOT map for the high-dimensional features produced by CLIP encoding of the two modalities, we mathematically formulate and derive a solution based on Villani's theorem, which can directly align two hypersphere distributions without manifold exponential maps. We implement it by leveraging input convex neural networks (ICNNs) for the optimal Kantorovich potential. With the optimally mapped features, a diffusion-based generator decodes them into 3D shapes. Extensive quantitative and qualitative comparisons with state-of-the-art methods demonstrate the superiority of HOTS3D for text-to-3D generation, especially in consistency with text semantics. The code will be publicly available.
Citations: 0
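The spherical optimal transport at the core of HOTS3D is solved in the paper via an ICNN parameterization of the Kantorovich potential. As a rough discrete stand-in (a deliberately different, simpler technique), entropic Sinkhorn OT between unit-normalized feature clouds with a cosine-based cost illustrates the kind of hypersphere alignment involved:

```python
import numpy as np

def sinkhorn_sphere(X, Y, eps=0.1, iters=500):
    """Entropic OT plan between two point clouds projected onto the unit
    hypersphere, with cost 1 - cosine similarity."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    C = 1.0 - X @ Y.T                      # cost on the sphere
    K = np.exp(-C / eps)                   # Gibbs kernel
    a = np.full(len(X), 1.0 / len(X))      # uniform source marginal
    b = np.full(len(Y), 1.0 / len(Y))      # uniform target marginal
    u, v = np.ones(len(X)), np.ones(len(Y))
    for _ in range(iters):                 # Sinkhorn scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]     # transport plan
```

The resulting plan couples text-side points to image-side points; the paper's continuous formulation instead learns a map through a convex potential, avoiding a fixed discrete support.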
A Survey on Quality Metrics for Text-to-Image Generation.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-07-01. DOI: 10.1109/TVCG.2025.3585077
Sebastian Hartwig, Dominik Engel, Leon Sick, Hannah Kniesel, Tristan Payer, Poonam Poonam, Michael Glockler, Alex Bauerle, Timo Ropinski
Abstract: AI-based text-to-image models not only excel at generating realistic images, they also give designers increasingly fine-grained control over the image content. Consequently, these approaches have gathered increased attention in the computer graphics research community, which has historically been devoted to traditional rendering techniques that offer precise control over scene parameters (e.g., objects, materials, and lighting). While the quality of conventionally rendered images is assessed through well-established image quality metrics such as SSIM or PSNR, the unique challenges of text-to-image generation require dedicated quality metrics. These metrics must measure not only overall image quality but also how well images reflect the given text prompts, whereby the control of scene and rendering parameters is interwoven. In this survey, we provide a comprehensive overview of such text-to-image quality metrics and propose a taxonomy to categorize them. Our taxonomy is grounded in the assumption that two main quality criteria, compositional quality and general quality, contribute to the overall image quality. Besides the metrics, the survey covers dedicated text-to-image benchmark datasets over which the metrics are frequently computed. Finally, we identify limitations and open challenges in the field of text-to-image generation and derive guidelines for practitioners conducting text-to-image evaluation.
Citations: 0
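Among the simplest metrics such a survey covers is reference-free text-image alignment in the style of CLIPScore: a rescaled cosine similarity between embeddings. A sketch, with the embedding vectors assumed to come from some CLIP-like encoder (the encoders themselves are outside this snippet):

```python
import numpy as np

def clipscore_like(img_emb, txt_emb, w=2.5):
    """CLIPScore-style alignment: w * max(0, cosine similarity) between
    an image embedding and a text embedding."""
    i = img_emb / np.linalg.norm(img_emb)
    t = txt_emb / np.linalg.norm(txt_emb)
    return w * max(0.0, float(i @ t))
```

Such embedding-similarity metrics capture prompt alignment but not, by themselves, the compositional quality criteria (object counts, spatial relations) that the taxonomy separates out.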
Parameterize Structure with Differentiable Template for 3D Shape Generation.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-06-27. DOI: 10.1109/TVCG.2025.3583987
Changfeng Ma, Pengxiao Guo, Shuangyu Yang, Yinuo Chen, Jie Guo, Chongjun Wang, Yanwen Guo, Wenping Wang
Abstract: Structural representation is crucial for reconstructing and generating editable 3D shapes with part semantics. Recent 3D shape generation works employ complicated networks and structure definitions that rely on hierarchical annotations, and they pay less attention to the details inside parts. In this paper, we propose a method that parameterizes the structure shared within a category using a differentiable template and corresponding fixed-length parameters. Specific parameters are fed into the template to calculate cuboids that describe a concrete shape, and the boundaries of three-view renderings of each cuboid further describe the details inside it. Shapes are thus represented by the parameters and the three-view details inside the cuboids, from which an SDF can be calculated to recover the object. Benefiting from the fixed-length parameters and three-view details, our networks for reconstruction and generation are simple and effective at learning the latent space. Our method can reconstruct or generate diverse shapes with complicated details and interpolate between them smoothly. Extensive evaluations demonstrate the superiority of our method for reconstruction from point clouds, generation, and interpolation.
Citations: 0
Integrating User Input in Automated Object Placement for Augmented Reality.
IEEE Transactions on Visualization and Computer Graphics. Pub Date: 2025-06-27. DOI: 10.1109/TVCG.2025.3583745
Jalal Safari Bazargani, Abolghasem Sadeghi-Niaraki, Soo-Mi Choi
Abstract: Object placement in Augmented Reality (AR) is crucial for creating immersive and functional experiences. However, a critical research gap exists in combining user input with efficient automated placement, particularly in understanding spatial relationships and optimal placement. This study addresses this gap with a novel object placement pipeline for AR applications that balances automation with user-directed placement. The pipeline employs entity recognition, object detection, and depth estimation, along with spawn area allocation, to create a placement system. We compared the proposed method against manual placement in a comprehensive evaluation involving 50 participants, which included user experience questionnaires, a comparative study of task performance, and post-task interviews. Results indicate that our pipeline significantly reduces task completion time while maintaining accuracy comparable to manual placement, and the UEQ-S and TENS scores revealed high user satisfaction. While manual placement offered more direct control, our method provided a more streamlined, efficient experience. This study contributes to the field of object placement in AR by demonstrating the potential of automated systems to enhance user experience and task efficiency.
Citations: 0