Jacob W. Kamminga, Michael D. Jones, Kevin Seppi, N. Meratnia, P. Havinga
{"title":"Synchronization between Sensors and Cameras in Movement Data Labeling Frameworks","authors":"Jacob W. Kamminga, Michael D. Jones, Kevin Seppi, N. Meratnia, P. Havinga","doi":"10.1145/3359427.3361920","DOIUrl":null,"url":null,"abstract":"Obtaining labeled data for activity recognition tasks is a tremendously time consuming, tedious, and labor-intensive task. Often, ground-truth video of the activity is recorded along with sensordata recorded during the activity. The data must be synchronized with the recorded video to be useful. In this paper, we present and compare two labeling frameworks that each has a different approach to synchronization. Approach A uses time-stamped visual indicators positioned on the data loggers. The approach results in accurate synchronization between video and data but adds more overhead and is not practical when using multiple sensors, subjects, and cameras simultaneously. Also, synchronization needs to be redone for each recording session. Approach B uses Real-Time Clocks (RTCs) on the devices for synchronization, which is less accurate but has several advantages: multiple subjects can be recorded on various cameras, it becomes easier to collect more data, and synchronization only needs to be done once across multiple recording sessions. Therefore, it is easier to collect more data which increases the probability of capturing an unusual activity. 
The best way forward is likely a combination of both approaches.","PeriodicalId":267440,"journal":{"name":"Proceedings of the 2nd Workshop on Data Acquisition To Analysis","volume":"161 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd Workshop on Data Acquisition To Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3359427.3361920","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Obtaining labeled data for activity recognition tasks is a tremendously time-consuming, tedious, and labor-intensive task. Often, ground-truth video of the activity is recorded along with sensor data recorded during the activity. The data must be synchronized with the recorded video to be useful. In this paper, we present and compare two labeling frameworks, each with a different approach to synchronization. Approach A uses time-stamped visual indicators positioned on the data loggers. This approach yields accurate synchronization between video and data but adds overhead and is not practical when using multiple sensors, subjects, and cameras simultaneously. Moreover, synchronization must be redone for each recording session. Approach B uses Real-Time Clocks (RTCs) on the devices for synchronization, which is less accurate but has several advantages: multiple subjects can be recorded on various cameras, and synchronization only needs to be done once across multiple recording sessions. This makes it easier to collect more data, which increases the probability of capturing an unusual activity. The best way forward is likely a combination of both approaches.
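To make the RTC-based synchronization of Approach B concrete, the following is a minimal illustrative sketch (not code from the paper): it maps an RTC-timestamped sensor sample to a video frame index, assuming both the logger's RTC and the camera clock report Unix-style seconds, a fixed frame rate, and a clock offset measured once in a one-time synchronization step. The function name and parameters are hypothetical.

```python
def sample_to_frame(sample_ts, video_start_ts, fps, clock_offset=0.0):
    """Map a sensor sample timestamp to the nearest video frame index.

    sample_ts      -- sensor sample time in seconds (logger RTC)
    video_start_ts -- timestamp of the first video frame (camera clock)
    fps            -- video frame rate in frames per second
    clock_offset   -- measured offset between the two clocks
                      (camera_time - logger_time), obtained from a
                      one-time synchronization step
    """
    # Translate the sensor timestamp into the camera's time base.
    elapsed = (sample_ts + clock_offset) - video_start_ts
    if elapsed < 0:
        raise ValueError("sample precedes the video recording")
    # Round to the nearest frame rather than truncating.
    return round(elapsed * fps)

# Example: a sample ~2.5 s into a 30 fps video, with a 0.2 s clock offset.
frame = sample_to_frame(102.3, 100.0, 30.0, clock_offset=0.2)  # -> 75
```

Because RTCs drift, in practice the offset would be re-measured periodically (or modeled with a drift term); the accuracy of this mapping is what Approach A improves upon at the cost of per-session overhead.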