{"title":"VIGMA: An Open-Access Framework for Visual Gait and Motion Analytics.","authors":"Kazi Shahrukh Omar, Shuaijie Wang, Ridhuparan Kungumaraju, Tanvi Bhatt, Fabio Miranda","doi":"10.1109/TVCG.2025.3564866","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3564866","url":null,"abstract":"<p><p>Gait disorders are commonly observed in older adults, who frequently experience various issues related to walking. Additionally, researchers and clinicians extensively investigate mobility related to gait in typically and atypically developing children, athletes, and individuals with orthopedic and neurological disorders. Effective gait analysis enables the understanding of the causal mechanisms of mobility and balance control of patients, the development of tailored treatment plans to improve mobility, the reduction of fall risk, and the tracking of rehabilitation progress. However, analyzing gait data is a complex task due to the multivariate nature of the data, the large volume of information to be interpreted, and the technical skills required. Existing tools for gait analysis are often limited to specific patient groups (e.g., cerebral palsy), only handle a specific subset of tasks in the entire workflow, and are not openly accessible. To address these shortcomings, we conducted a requirements assessment with gait practitioners (e.g., researchers, clinicians) via surveys and identified key components of the workflow, including (1) data processing and (2) data analysis and visualization. Based on the findings, we designed VIGMA, an open-access visual analytics framework integrated with computational notebooks and a Python library, to meet the identified requirements. Notably, the framework supports analytical capabilities for assessing disease progression and for comparing multiple patient groups. We validated the framework through usage scenarios with experts specializing in gait and mobility rehabilitation. VIGMA is available at https://github.com/komar41/vigma.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144012217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE VR 2025 Introducing the Special Issue","authors":"Han-Wei Shen;Kiyoshi Kiyokawa;Maud Marchal","doi":"10.1109/TVCG.2025.3544902","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3544902","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"x-x"},"PeriodicalIF":0.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977647","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Visualization and Computer Graphics: 2025 IEEE Conference on Virtual Reality and 3D User Interfaces","authors":"","doi":"10.1109/TVCG.2025.3544911","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3544911","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"xvii-xxix"},"PeriodicalIF":0.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977056","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE VR 2025 International Program Super Committee","authors":"","doi":"10.1109/TVCG.2025.3544901","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3544901","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"xv-xvi"},"PeriodicalIF":0.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977058","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE VR 2025 Message from the Program Chairs and Guest Editors","authors":"Daisuke Iwai;Luciana Nedel;Tabitha Peck;Voicu Popescu","doi":"10.1109/TVCG.2025.3544889","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3544889","url":null,"abstract":"In this special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG), we are pleased to present the top papers from the 32nd IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2025), held March 8–12, 2025, in Saint-Malo, France.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"xi-xii"},"PeriodicalIF":0.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977648","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DCINR: a Divide-and-Conquer Implicit Neural Representation for Compressing Time-Varying Volumetric Data in Hours.","authors":"Jun Han, Fan Yang","doi":"10.1109/TVCG.2025.3564255","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3564255","url":null,"abstract":"<p><p>Implicit neural representation (INR) has been a powerful paradigm for effectively compressing time-varying volumetric data. However, the optimization process can span days or even weeks due to its reliance on coordinate-based inputs and outputs for modeling volumetric data. To address this issue, we introduce a divide-and-conquer INR (DCINR), significantly accelerating the compressing process of time-varying volumetric data in hours. Our approach starts by dividing the data set into a set of non-overlapping blocks. Then, we apply a block selection strategy to weed out redundant blocks to reduce the computation cost without sacrificing performance. In parallel, each selected block is modeled by a tiny INR, with the size of the INR being adapted to match the information richness in the block. The block size is determined by maximizing the average network capacity. After optimization, the optimized INRs are utilized to decompress the data set. By evaluating our approach across various time-varying volumetric data sets, DCINR surpasses learning-based and lossy compression approaches in compression ratio, visual fidelity, and various performance metrics. Additionally, this method operates within a comparable compression time to that of lossy compressors, achieves extreme compression ratios ranging from thousands to tens of thousands, and preserves features with high quality.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144055575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Visualization and Computer Graphics: 2025 IEEE Conference on Virtual Reality and 3D User Interfaces","authors":"","doi":"10.1109/TVCG.2025.3544887","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3544887","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"31 5","pages":"i-ii"},"PeriodicalIF":0.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10977055","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143883393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intrinsic Decomposition with Robustly Separating and Restoring Colored Illumination.","authors":"Hao Sha, Shining Ma, Tongtai Cao, Yu Han, Yu Liu, Yue Liu","doi":"10.1109/TVCG.2025.3564229","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3564229","url":null,"abstract":"<p><p>Intrinsic decomposition separates an image into reflectance and shading, which contributes to image editing, augmented reality, etc. Despite recent efforts dedicated to this field, effectively separating colored illumination from reflectance and correctly restoring it into shading remains an challenge. We propose a deep intrinsic decomposition method to address this issue. Specifically, by transforming intrinsic decomposition process in RGB image domains into the combination of intensity and chromaticity domains, we propose a novel macro intrinsic decomposition network framework. This framework enables the generation of finer intrinsic components through more relevant features propagation and more detailed sub-constraints guidance. In order to expand the macro network, we integrate multiple attention mechanism modules in key positions of encoders, which enhances the extraction of distinct features. We also propose a skip connection module based on specific deep features guidance, which can filter out features that are physically irrelevant to each intrinsic component. Our method not only outperforms state-of-the-art methods across multiple datasets, but also robustly separates illumination from reflectance and restores it into shading in various types of images. By leveraging our intrinsic images, we achieve visually superior image editing effects compared to other methods, while also being able to manipulate the inherent lighting of the original scene.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144061683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}