IEEE Transactions on Visualization and Computer Graphics: Latest Publications

Real-and-Present: Investigating the Use of Life-Size 2D Video Avatars in HMD-Based AR Teleconferencing.
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3466554
Xuanyu Wang, Weizhan Zhang, Christian Sandor, Hongbo Fu
{"title":"Real-and-Present: Investigating the Use of Life-Size 2D Video Avatars in HMD-Based AR Teleconferencing.","authors":"Xuanyu Wang, Weizhan Zhang, Christian Sandor, Hongbo Fu","doi":"10.1109/TVCG.2024.3466554","DOIUrl":"10.1109/TVCG.2024.3466554","url":null,"abstract":"<p><p>Augmented Reality (AR) teleconferencing allows spatially distributed users to interact with each other in 3D through agents in their own physical environments. Existing methods leveraging volumetric capturing and reconstruction can provide a high-fidelity experience but are often too complex and expensive for everyday use. Other solutions target mobile and effortless-to-setup teleconferencing on AR Head Mounted Displays (HMD). They directly transplant the conventional video conferencing onto an AR-HMD platform or use avatars to represent remote participants. However, they can only support either a high fidelity or a high level of co-presence. Moreover, the limited Field of View (FoV) of HMDs could further degrade users' immersive experience. To achieve a balance between fidelity and co-presence, we explore using life-size 2D video-based avatars (video avatars for short) in AR teleconferencing. Specifically, with the potential effect of FoV on users' perception of proximity, we first conducted a pilot study to explore the local-user-centered optimal placement of video avatars in small-group AR conversations. With the placement results, we then implement a proof-of-concept prototype of video-avatar-based teleconferencing. We conduct user evaluations with our prototype to verify its effectiveness in balancing fidelity and co-presence. Following the indication in the pilot study, we further quantitatively explore the effect of FoV size on the video avatar's optimal placement through a user study involving more FoV conditions in a VR-simulated environment. We regress placement models to serve as references for computationally determining video avatar placements in such teleconferencing applications on various existing AR HMDs and future ones with bigger FoVs.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142309490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Reducing Search Regions for Fast Detection of Exact Point-to-Point Geodesic Paths on Meshes.
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3466242
Shuai Ma, Wencheng Wang, Fei Hou
{"title":"Reducing Search Regions for Fast Detection of Exact Point-to-Point Geodesic Paths on Meshes.","authors":"Shuai Ma, Wencheng Wang, Fei Hou","doi":"10.1109/TVCG.2024.3466242","DOIUrl":"10.1109/TVCG.2024.3466242","url":null,"abstract":"<p><p>Fast detection of exact point-to-point geodesic paths on meshes is still challenging with existing methods. For this, we present a method to reduce the region to be investigated on the mesh for efficiency. It is by our observation that a mesh and its simplified one are very alike so that the geodesic path between two defined points on the mesh and the geodesic path between their corresponding two points on the simplified mesh are very near to each other in the 3D Euclidean space. Thus, with the geodesic path on the simplified mesh, we can generate a region on the original mesh that contains the geodesic path on the mesh, called the search region, by which existing methods can reduce the search scope in detecting geodesic paths, and so obtaining acceleration. We demonstrate the rationale behind our proposed method. Experimental results show that we can promote existing methods well, e.g., the global exact method VTP (vertex-oriented triangle propagation) can be sped up by even over 200 times when handling large meshes. Our search region can also speed up path initialization using the Dijkstra algorithm to promote local methods, e.g., obtaining an acceleration of at least two times in our tests.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142309491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
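A minimal sketch of the search-region idea described in the entry above, not the authors' implementation: it assumes the geodesic on the simplified mesh is already available as a 3D polyline, builds the region with a plain point-to-segment distance test over the original mesh's vertices, and then runs an ordinary Dijkstra restricted to that region (standing in for the exact solvers, such as VTP, that the paper actually accelerates). All function and parameter names are illustrative.

```python
import heapq
import numpy as np

def point_to_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def search_region(vertices, coarse_path, radius):
    """Boolean mask over original-mesh vertices lying within `radius`
    of the coarse geodesic polyline computed on the simplified mesh."""
    mask = np.zeros(len(vertices), dtype=bool)
    for i, v in enumerate(vertices):
        mask[i] = any(
            point_to_segment_dist(v, coarse_path[k], coarse_path[k + 1]) <= radius
            for k in range(len(coarse_path) - 1)
        )
    return mask

def dijkstra_in_region(adj, mask, src, dst):
    """Shortest edge path restricted to the search region; in the paper's setting
    a restricted search like this initializes or bounds the exact geodesic solver."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:            # adj[u] = list of (neighbor index, edge length)
            if not mask[v]:
                continue               # prune vertices outside the reduced region
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

In the paper the region is built around the coarse path on the original mesh and the exact method is run inside it; the Dijkstra here only illustrates how the mask prunes the search.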
DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing — A Design Study
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3456329
Alexander Wyss; Gabriela Morgenshtern; Amanda Hirsch-Hüsler; Jürgen Bernard
Abstract: In medical diagnostics of both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics consumables poses a significant threat to patients. Objective data-driven decision-making on the severity of contamination is key for reducing patient risk, while saving time and cost in quality assessment. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings in the current process are limitations in exploring thousands of images, data-driven decision making, and ineffective knowledge externalization. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study's learnings, and a generalizable framework for knowledge externalization. DaedalusData is a visual analytics system that enables domain experts to explore particle contamination patterns, label particles in label alphabets, and externalize knowledge through semi-supervised label-informed data projections. The results of our case study and user study show high usability of DaedalusData and its efficient support of experts in generating comprehensive overviews of thousands of particles, labeling large quantities of particles, and externalizing knowledge to augment the dataset further. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.
Vol. 31, no. 1, pp. 54-64.
Citations: 0
Towards Enhancing Low Vision Usability of Data Charts on Smartphones
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-20 DOI: 10.1109/TVCG.2024.3456348
Yash Prakash; Pathan Aseef Khan; Akshay Kolgar Nayak; Sampath Jayarathna; Hae-Na Lee; Vikas Ashok
Abstract: The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically “see” the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.
Vol. 31, no. 1, pp. 853-863.
Citations: 0
Visualization Atlases: Explaining and Exploring Complex Topics Through Data, Visualization, and Narration
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-20 DOI: 10.1109/TVCG.2024.3456311
Jinrui Wang; Xinhuan Shu; Benjamin Bach; Uta Hinrichs
Abstract: This paper defines, analyzes, and discusses the emerging genre of visualization atlases. We currently witness an increase in web-based, data-driven initiatives that call themselves “atlases” while explaining complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. To understand this emerging genre and inform their design, study, and authoring support, we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of a visualization atlas as a compendium of (web) pages aimed at explaining and supporting exploration of data about a dedicated topic through data, visualizations and narration, (2) a set of design patterns across 8 design dimensions, (3) insights into atlas creation from the interviews, and (4) a definition of 5 visualization atlas genres. We found that visualization atlases are unique in the way they combine i) exploratory visualization, ii) narrative elements from data-driven storytelling and iii) structured navigation mechanisms. They target a wide range of audiences with different levels of domain knowledge, acting as tools for study, communication, and discovery. We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aimed to inform the design and study of visualization atlases.
Vol. 31, no. 1, pp. 437-447.
Citations: 0
Generalization of CNNs on Relational Reasoning With Bar Charts.
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-19 DOI: 10.1109/TVCG.2024.3463800
Zhenxing Cui, Lu Chen, Yunhai Wang, Daniel Haehn, Yong Wang, Hanspeter Pfister
{"title":"Generalization of CNNs on Relational Reasoning With Bar Charts.","authors":"Zhenxing Cui, Lu Chen, Yunhai Wang, Daniel Haehn, Yong Wang, Hanspeter Pfister","doi":"10.1109/TVCG.2024.3463800","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3463800","url":null,"abstract":"<p><p>This paper presents a systematic study of the generalization of convolutional neural networks (CNNs) and humans on relational reasoning tasks with bar charts. We first revisit previous experiments on graphical perception and update the benchmark performance of CNNs. We then test the generalization performance of CNNs on a classic relational reasoning task: estimating bar length ratios in a bar chart, by progressively perturbing the standard visualizations. We further conduct a user study to compare the performance of CNNs and humans. Our results show that CNNs outperform humans only when the training and test data have the same visual encodings. Otherwise, they may perform worse. We also find that CNNs are sensitive to perturbations in various visual encodings, regardless of their relevance to the target bars. Yet, humans are mainly influenced by bar lengths. Our study suggests that robust relational reasoning with visualizations is challenging for CNNs. Improving CNNs' generalization performance may require training them to better recognize task-related visual properties.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
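To make the task concrete, the following sketch (not the authors' code; chart size, bar count, and the marking scheme are assumptions) generates the kind of synthetic bar-chart stimulus used in such graphical-perception studies, together with its ground-truth bar-length ratio, which a CNN would be trained to regress and then tested on under perturbed encodings.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # render off-screen
import matplotlib.pyplot as plt

def make_stimulus(n_bars=5, size_px=100, seed=None):
    """Render one bar chart and return (image array, ground-truth length ratio)."""
    rng = np.random.default_rng(seed)
    heights = rng.uniform(0.1, 1.0, n_bars)
    i, j = rng.choice(n_bars, size=2, replace=False)
    ratio = min(heights[i], heights[j]) / max(heights[i], heights[j])

    fig, ax = plt.subplots(figsize=(1, 1), dpi=size_px)
    colors = ["lightgray"] * n_bars
    colors[i] = colors[j] = "black"        # mark the two bars to be compared
    ax.bar(np.arange(n_bars), heights, color=colors)
    ax.set_ylim(0, 1.05)
    ax.axis("off")
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[..., :3].copy()
    plt.close(fig)
    return img, ratio

img, ratio = make_stimulus(seed=0)
print(img.shape, round(float(ratio), 3))   # e.g. (100, 100, 3) and the target ratio
```

Perturbation tests of the kind the abstract describes would then vary encodings (colors, strokes, backgrounds, bar widths) at test time while keeping the ratio labels fixed.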
Adaptive Complementary Filter for Hybrid Inside-Out Outside-In HMD Tracking With Smooth Transitions
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-19 DOI: 10.1109/TVCG.2024.3464738
Riccardo Monica; Dario Lodi Rizzini; Jacopo Aleotti
Abstract: Head-mounted displays (HMDs) in room-scale virtual reality are usually tracked using inside-out visual SLAM algorithms. Alternatively, to track the motion of the HMD with respect to a fixed real-world reference frame, an outside-in instrumentation like a motion capture system can be adopted. However, outside-in tracking systems may temporarily lose tracking, as they suffer from occlusions and blind spots. A possible solution is a hybrid approach in which the inside-out tracker of the HMD is augmented with an outside-in sensing system. On the other hand, when the tracking signal of the outside-in system is recovered after a loss of tracking, the transition from inside-out tracking to hybrid tracking may generate a discontinuity, i.e., a sudden change of the virtual viewpoint, which can be uncomfortable for the user. Therefore, hybrid tracking solutions for HMDs require advanced sensor fusion algorithms to obtain a smooth transition. This work proposes a method for hybrid tracking of an HMD with smooth transitions based on an adaptive complementary filter. The proposed approach can be configured with several parameters that determine a trade-off between user experience and tracking error. A user study was carried out in a room-scale virtual reality environment, where users carried out two different tasks while multiple tracking-signal losses of the outside-in sensor system occurred. The results show that the proposed approach improves user experience compared to a standard Extended Kalman Filter, and that tracking error is lower compared to a state-of-the-art complementary filter when configured for the same quality of user experience.
Vol. 31, no. 2, pp. 1598-1612. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10684565
Citations: 0
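As a rough, position-only illustration of the kind of fusion described above (not the authors' filter, which handles full poses and a more refined gain schedule), the sketch below blends a drifting inside-out position with intermittent outside-in fixes, ramping the correction gain back up after the outside-in signal is reacquired so the rendered viewpoint converges without a sudden jump. All class and parameter names are made up.

```python
import numpy as np

class HybridPositionFilter:
    def __init__(self, base_gain=0.05, ramp_steps=120):
        self.base_gain = base_gain       # steady-state correction applied per frame
        self.ramp_steps = ramp_steps     # frames over which the gain fades back in
        self.offset = np.zeros(3)        # estimated drift of the inside-out tracker
        self.frames_since_reacq = None   # None while the outside-in signal is lost

    def update(self, p_inside, p_outside=None):
        """p_inside: inside-out position this frame; p_outside: outside-in fix or None."""
        if p_outside is None:
            self.frames_since_reacq = None       # tracking lost: keep the last offset
        else:
            if self.frames_since_reacq is None:
                self.frames_since_reacq = 0      # signal just reacquired
            else:
                self.frames_since_reacq += 1
            ramp = min(1.0, self.frames_since_reacq / self.ramp_steps)
            gain = self.base_gain * ramp         # adaptive gain: near zero right after reacquisition
            error = (p_outside - p_inside) - self.offset
            self.offset += gain * error          # pull the drift estimate toward the outside-in fix
        return p_inside + self.offset            # corrected position used for rendering
```

With a constant gain the corrected viewpoint would snap toward the outside-in fix on the first frame after reacquisition; ramping the gain is what trades a brief period of residual tracking error for a smooth transition, which is the trade-off the paper parameterizes.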
Rapid and Precise Topological Comparison with Merge Tree Neural Networks
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-19 DOI: 10.1109/TVCG.2024.3456395
Yu Qin; Brittany Terese Fasy; Carola Wenk; Brian Summa
Abstract: Merge trees are a valuable tool in the scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how to train graph neural networks, which emerged as effective encoders for graphs, in order to produce embeddings of merge trees in vector spaces for efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency. In particular, we speed up the prior state-of-the-art by more than 100× on the benchmark datasets while maintaining an error rate below 0.1%.
Vol. 31, no. 1, pp. 1322-1332.
Citations: 0
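The embed-then-compare pattern that replaces exhaustive node matching can be sketched in a few lines. This is only an illustration of the pattern, not the MTNN architecture: it uses an untrained mean-aggregation layer and plain softmax attention pooling in NumPy, whereas MTNN learns its GNN weights and adds a topological attention mechanism. The function names and the toy merge-tree encoding are assumptions.

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of a path graph; a stand-in for a real merge-tree topology."""
    a = np.zeros((n, n))
    idx = np.arange(n - 1)
    a[idx, idx + 1] = a[idx + 1, idx] = 1.0
    return a

def message_pass(node_feats, adj, w):
    """One round of neighbor averaging followed by a linear map and ReLU."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    agg = (adj @ node_feats) / deg
    return np.maximum((node_feats + agg) @ w, 0.0)

def attention_pool(node_embs, q):
    """Pool node embeddings into a single tree embedding with softmax attention."""
    scores = node_embs @ q
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ node_embs

def tree_similarity(tree_a, tree_b, w, q):
    """Cosine similarity of two pooled tree embeddings."""
    ea = attention_pool(message_pass(*tree_a, w), q)
    eb = attention_pool(message_pass(*tree_b, w), q)
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb) + 1e-12))

rng = np.random.default_rng(0)
w, q = rng.normal(size=(4, 8)), rng.normal(size=8)
# Toy "merge trees": (node features such as birth/death values, adjacency matrix).
t1 = (rng.normal(size=(5, 4)), path_adjacency(5))
t2 = (rng.normal(size=(6, 4)), path_adjacency(6))
print(tree_similarity(t1, t2, w, q))
```

Once trees are mapped to fixed-length vectors, comparing two trees is a single dot product rather than a combinatorial node-matching problem, which is where the reported speed-up comes from.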
Who Let the Guards Out: Visual Support for Patrolling Games
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-18 DOI: 10.1109/TVCG.2024.3456306
Matěj Lang; Adam Štěpánek; Róbert Zvara; Vojtěch Řehák; Barbora Kozlíková
Abstract: Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games. They simulate the behavior of the patrol and adversary in the building, which is modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and the probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.
Vol. 31, no. 1, pp. 34-43.
Citations: 0
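The underlying game can be illustrated with a toy Monte-Carlo model (not the collaborators' algorithms nor the tool itself): a memoryless randomized patrol strategy over a small room graph, and an estimate of how often an attack of fixed duration at a chosen room would be intercepted. Room names, probabilities, and parameters are invented for illustration.

```python
import random

# Room graph: each room lists its directly reachable neighbors.
rooms = {
    "lobby": ["gallery", "vault"],
    "gallery": ["lobby", "vault"],
    "vault": ["lobby", "gallery"],
}

def caught_probability(strategy, target, attack_len, trials=20000, seed=1):
    """strategy[room] -> list of (next_room, prob). The patrol catches the attacker
    if it visits `target` within `attack_len` steps of a randomly timed attack."""
    rng = random.Random(seed)
    caught = 0
    for _ in range(trials):
        pos = rng.choice(list(strategy))            # random patrol position at attack time
        for _ in range(attack_len):
            nxt, probs = zip(*strategy[pos])
            pos = rng.choices(nxt, weights=probs)[0]
            if pos == target:
                caught += 1
                break
    return caught / trials

# A uniform random strategy; a solver would instead optimize these probabilities.
uniform = {r: [(n, 1 / len(ns)) for n in ns] for r, ns in rooms.items()}
print(caught_probability(uniform, target="vault", attack_len=3))
```

A visualization tool such as the one described above would let designers inspect routes and probabilities like `strategy` and compare the resulting success rates across candidate strategies.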
A Multi-Level Task Framework for Event Sequence Analysis
IEEE transactions on visualization and computer graphics Pub Date : 2024-09-18 DOI: 10.1109/TVCG.2024.3456510
Kazi Tasnim Zinat; Saimadhav Naga Sakhamuri; Aaron Sun Chen; Zhicheng Liu
Abstract: Despite the development of numerous visual analytics tools for event sequence data across various domains, including but not limited to healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on multivariate datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analytics, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprise five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and manage provenance. Each intent is accomplished through a number of strategies; for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that each technique can be expressed through a quartet of action-input-output-criteria. We demonstrate the framework's descriptive power through case studies and discuss its similarities and differences with previous event sequence task taxonomies.
Vol. 31, no. 1, pp. 842-852.
Citations: 0
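One possible encoding of the framework's four levels and the action-input-output-criteria quartet as plain records is sketched below; this is our reading for illustration, not an artifact from the paper, and all field values are invented examples.

```python
from dataclasses import dataclass

@dataclass
class Technique:
    action: str      # what is done, e.g. "aggregate"
    input: str       # component the action consumes, e.g. "event sequences"
    output: str      # component it produces, e.g. "sequence clusters"
    criteria: str    # how/why it is applied, e.g. "by shared event prefix"

@dataclass
class AnalysisStep:
    objective: str   # overall goal of the analysis
    intent: str      # one of: augment data, simplify data, configure data,
                     # configure visualization, manage provenance
    strategy: str    # e.g. aggregation, summarization, segmentation
    technique: Technique

step = AnalysisStep(
    objective="compare treatment pathways",
    intent="simplify data",
    strategy="aggregation",
    technique=Technique("aggregate", "event sequences", "sequence clusters",
                        "by shared event prefix"),
)
print(step.intent, "->", step.technique.action)
```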