Latest Articles: IEEE Transactions on Visualization and Computer Graphics

VIGMA: An Open-Access Framework for Visual Gait and Motion Analytics
Pub Date: 2025-04-28 · DOI: 10.1109/TVCG.2025.3564866
Kazi Shahrukh Omar, Shuaijie Wang, Ridhuparan Kungumaraju, Tanvi Bhatt, Fabio Miranda
Abstract: Gait disorders are commonly observed in older adults, who frequently experience various issues related to walking. Researchers and clinicians also extensively investigate gait-related mobility in typically and atypically developing children, athletes, and individuals with orthopedic and neurological disorders. Effective gait analysis enables understanding of the causal mechanisms of patients' mobility and balance control, development of tailored treatment plans to improve mobility, reduction of fall risk, and tracking of rehabilitation progress. However, analyzing gait data is a complex task due to the multivariate nature of the data, the large volume of information to be interpreted, and the technical skills required. Existing tools for gait analysis are often limited to specific patient groups (e.g., cerebral palsy), handle only a subset of tasks in the overall workflow, and are not openly accessible. To address these shortcomings, we conducted a requirements assessment with gait practitioners (e.g., researchers, clinicians) via surveys and identified key components of the workflow: (1) data processing and (2) data analysis and visualization. Based on the findings, we designed VIGMA, an open-access visual analytics framework integrated with computational notebooks and a Python library, to meet the identified requirements. Notably, the framework supports analytical capabilities for assessing disease progression and for comparing multiple patient groups. We validated the framework through usage scenarios with experts specializing in gait and mobility rehabilitation. VIGMA is available at https://github.com/komar41/vigma.
Citations: 0
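The abstract's data-processing stage operates on multivariate gait recordings. As an illustration of the kind of metric such a pipeline derives (a generic sketch, not VIGMA's actual API; the heel-strike timestamps are made up), stride time and cadence can be computed from event times:

```python
# Generic gait-metric sketch (not VIGMA's API). Heel-strike
# timestamps in seconds for one leg; values are illustrative.
heel_strikes = [0.0, 1.1, 2.2, 3.25, 4.4, 5.5]

# Stride time: interval between consecutive heel strikes of the same leg.
stride_times = [b - a for a, b in zip(heel_strikes, heel_strikes[1:])]
mean_stride = sum(stride_times) / len(stride_times)

# Cadence in steps per minute: two steps per stride.
cadence = 120.0 / mean_stride

print(f"mean stride {mean_stride:.2f} s, cadence {cadence:.0f} steps/min")
```

In practice such per-trial metrics feed into the comparison and progression analyses the framework exposes through its notebooks.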
IEEE VR 2025 Introducing the Special Issue
Pub Date: 2025-04-25 · DOI: 10.1109/TVCG.2025.3544902 · Vol. 31, No. 5, pp. x-x
Han-Wei Shen; Kiyoshi Kiyokawa; Maud Marchal
Citations: 0
IEEE Transactions on Visualization and Computer Graphics: 2025 IEEE Conference on Virtual Reality and 3D User Interfaces
Pub Date: 2025-04-25 · DOI: 10.1109/TVCG.2025.3544911 · Vol. 31, No. 5, pp. xvii-xxix
Citations: 0
IEEE VR 2025 Steering Committee Members
Pub Date: 2025-04-25 · DOI: 10.1109/TVCG.2025.3544899 · Vol. 31, No. 5, pp. xiv-xiv
Citations: 0
IEEE VR 2025 International Program Super Committee
Pub Date: 2025-04-25 · DOI: 10.1109/TVCG.2025.3544901 · Vol. 31, No. 5, pp. xv-xvi
Citations: 0
IEEE VR 2025 Message from the Program Chairs and Guest Editors
Pub Date: 2025-04-25 · DOI: 10.1109/TVCG.2025.3544889 · Vol. 31, No. 5, pp. xi-xii
Daisuke Iwai; Luciana Nedel; Tabitha Peck; Voicu Popescu
Abstract: In this special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG), we are pleased to present the top papers from the 32nd IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2025), held March 8–12, 2025, in Saint-Malo, France.
Citations: 0
IEEE VR 2025 Visualization and Graphics Technical Committee (VGTC) Statement
Pub Date: 2025-04-25 · DOI: 10.1109/TVCG.2025.3544900 · Vol. 31, No. 5, pp. xiii-xiii
Citations: 0
DCINR: A Divide-and-Conquer Implicit Neural Representation for Compressing Time-Varying Volumetric Data in Hours
Pub Date: 2025-04-25 · DOI: 10.1109/TVCG.2025.3564255
Jun Han, Fan Yang
Abstract: Implicit neural representation (INR) has become a powerful paradigm for compressing time-varying volumetric data. However, the optimization process can span days or even weeks due to its reliance on coordinate-based inputs and outputs for modeling volumetric data. To address this issue, we introduce a divide-and-conquer INR (DCINR), which compresses time-varying volumetric data in hours. Our approach starts by dividing the data set into a set of non-overlapping blocks. We then apply a block selection strategy that weeds out redundant blocks to reduce computation cost without sacrificing performance. In parallel, each selected block is modeled by a tiny INR, with the size of the INR adapted to the information richness of the block; the block size is determined by maximizing the average network capacity. After optimization, the optimized INRs are used to decompress the data set. Evaluated across various time-varying volumetric data sets, DCINR surpasses learning-based and lossy compression approaches in compression ratio, visual fidelity, and other performance metrics. It operates within a compression time comparable to that of lossy compressors, achieves extreme compression ratios ranging from thousands to tens of thousands, and preserves features with high quality.
Citations: 0
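The divide and select steps in the abstract can be sketched numerically. This is a hedged illustration, not the paper's implementation: the variance threshold stands in for whatever redundancy criterion DCINR actually uses, and the per-block tiny INR fit is omitted.

```python
import numpy as np

def partition(volume, bs):
    """Split a cubic volume into non-overlapping bs^3 blocks."""
    d = volume.shape[0]
    blocks = {}
    for x in range(0, d, bs):
        for y in range(0, d, bs):
            for z in range(0, d, bs):
                blocks[(x, y, z)] = volume[x:x + bs, y:y + bs, z:z + bs]
    return blocks

def select(blocks, thresh=1e-3):
    """Discard near-constant (redundant) blocks; each kept block
    would then be fitted by its own tiny INR."""
    return {k: b for k, b in blocks.items() if b.var() > thresh}

rng = np.random.default_rng(0)
vol = np.zeros((16, 16, 16), dtype=np.float32)
vol[:8, :8, :8] = rng.random((8, 8, 8))   # only one octant has structure

blocks = partition(vol, 8)                # 8 blocks of 8^3 voxels
kept = select(blocks)                     # constant blocks dropped
print(len(blocks), len(kept))
```

Skipping the empty octants is what keeps the total optimization budget proportional to the data's information content rather than its raw size.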
IEEE Transactions on Visualization and Computer Graphics: 2025 IEEE Conference on Virtual Reality and 3D User Interfaces
Pub Date: 2025-04-25 · DOI: 10.1109/TVCG.2025.3544887 · Vol. 31, No. 5, pp. i-ii
Citations: 0
Intrinsic Decomposition with Robustly Separating and Restoring Colored Illumination
Pub Date: 2025-04-24 · DOI: 10.1109/TVCG.2025.3564229
Hao Sha, Shining Ma, Tongtai Cao, Yu Han, Yu Liu, Yue Liu
Abstract: Intrinsic decomposition separates an image into reflectance and shading, which benefits image editing, augmented reality, and related applications. Despite recent efforts in this field, effectively separating colored illumination from reflectance and correctly restoring it into shading remains a challenge. We propose a deep intrinsic decomposition method to address this issue. Specifically, by transforming the intrinsic decomposition process from the RGB image domain into a combination of intensity and chromaticity domains, we propose a novel macro intrinsic decomposition network framework that generates finer intrinsic components through more relevant feature propagation and more detailed sub-constraint guidance. To extend the macro network, we integrate attention modules at key positions in the encoders, enhancing the extraction of distinct features, and propose a skip connection module guided by specific deep features, which filters out features physically irrelevant to each intrinsic component. Our method not only outperforms state-of-the-art methods across multiple datasets, but also robustly separates illumination from reflectance and restores it into shading for various types of images. Leveraging our intrinsic images, we achieve visually superior image editing compared to other methods, while also being able to manipulate the inherent lighting of the original scene.
Citations: 0
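The abstract builds on the standard intrinsic model, in which the observed image is the per-pixel product of reflectance and shading. A minimal numeric sketch of that model and of the intensity/chromaticity split it mentions (the pixel values and the mean-based chromaticity convention here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

# One RGB pixel under the intrinsic model I = R * S (per channel).
reflectance = np.array([0.8, 0.4, 0.2])  # surface albedo
shading = np.array([0.5, 0.45, 0.3])     # colored illumination

observed = reflectance * shading

# The inverse problem the network learns: recover shading from the
# observed pixel once reflectance is known.
recovered_shading = observed / reflectance

# Intensity/chromaticity split: intensity is the channel mean,
# chromaticity the color direction normalized to unit mean. A
# non-uniform shading chromaticity signals colored illumination.
intensity = recovered_shading.mean()
chromaticity = recovered_shading / intensity
print(np.round(chromaticity, 2))
```

Working in intensity and chromaticity separates *how bright* the illumination is from *what color* it is, which is what makes the colored component explicit enough to constrain.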