Lishuang Zhan, Rongting Li, Rui Cao, Juncong Lin, Shihui Guo
Visual Informatics, Volume 9, Issue 2, Article 100235 (JCR Q2, Computer Science, Information Systems; Impact Factor 3.8)
Published: 2025-03-25
DOI: 10.1016/j.visinf.2025.100235
https://www.sciencedirect.com/science/article/pii/S2468502X25000166
VisMocap: Interactive visualization and analysis for multi-source motion capture data
With the rapid advancement of artificial intelligence, research on enabling computers to assist humans in achieving intelligent augmentation, thereby enhancing the accuracy and efficiency of information perception and processing, has been steadily evolving. Among these developments, innovations in human motion capture technology have been emerging rapidly, leading to an increasing diversity of motion capture data types. This diversity necessitates a unified standard for multi-source data to enable effective analysis and comparison of how well each source represents human motion. Additionally, motion capture data often suffer from significant noise, acquisition delays, and asynchrony, making their effective processing and visualization a critical challenge. In this paper, we used data collected from a prototype of flexible fabric-based motion capture clothing and from optical motion capture devices as inputs. We performed time synchronization and error analysis between the two data types, segmented individual actions from continuous motion sequences, and presented the processed results through a concise and intuitive visualization interface. Finally, we evaluated various system metrics, including the accuracy of time synchronization, the error of fitting fabric resistance to joint angles, the precision of motion segmentation, and user feedback.