{"title":"多传感器活动识别的智能数据标注工具","authors":"Alexander Diete, T. Sztyler, H. Stuckenschmidt","doi":"10.1109/PERCOMW.2017.7917542","DOIUrl":null,"url":null,"abstract":"Annotation of multimodal data sets is often a time consuming and a challenging task as many approaches require an accurate labeling. This includes in particular video recordings as often labeling exact to a frame is required. For that purpose, we created an annotation tool that enables to annotate data sets of video and inertial sensor data. However, in contrast to the most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. More precisely, after labeling a small set of instances our system is able to provide labeling recommendations and in turn it makes learning of image features more feasible by speeding up the labeling time for single frames. We aim to rely on the inertial sensors of our wristband to support the labeling of video recordings. For that purpose, we apply template matching in context of dynamic time warping to identify time intervals of certain actions. To investigate the feasibility of our approach we focus on a real world scenario, i.e., we gathered a data set which describes an order picking scenario of a logistic company. In this context, we focus on the picking process as the selection of the correct items can be prone to errors. Preliminary results show that we are able to identify 69% of the grabbing motion periods of time.","PeriodicalId":319638,"journal":{"name":"2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)","volume":"661 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"A smart data annotation tool for multi-sensor activity recognition\",\"authors\":\"Alexander Diete, T. Sztyler, H. Stuckenschmidt\",\"doi\":\"10.1109/PERCOMW.2017.7917542\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Annotation of multimodal data sets is often a time consuming and a challenging task as many approaches require an accurate labeling. This includes in particular video recordings as often labeling exact to a frame is required. For that purpose, we created an annotation tool that enables to annotate data sets of video and inertial sensor data. However, in contrast to the most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole dataset. More precisely, after labeling a small set of instances our system is able to provide labeling recommendations and in turn it makes learning of image features more feasible by speeding up the labeling time for single frames. We aim to rely on the inertial sensors of our wristband to support the labeling of video recordings. For that purpose, we apply template matching in context of dynamic time warping to identify time intervals of certain actions. To investigate the feasibility of our approach we focus on a real world scenario, i.e., we gathered a data set which describes an order picking scenario of a logistic company. In this context, we focus on the picking process as the selection of the correct items can be prone to errors. 
Preliminary results show that we are able to identify 69% of the grabbing motion periods of time.\",\"PeriodicalId\":319638,\"journal\":{\"name\":\"2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)\",\"volume\":\"661 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-03-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PERCOMW.2017.7917542\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PERCOMW.2017.7917542","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A smart data annotation tool for multi-sensor activity recognition
Annotation of multimodal data sets is often a time-consuming and challenging task, as many approaches require accurate labeling. This is particularly true for video recordings, where frame-exact labeling is often required. For this purpose, we created an annotation tool for data sets that combine video and inertial sensor data. In contrast to most existing approaches, we focus on semi-supervised labeling support to infer labels for the whole data set. More precisely, after a small set of instances has been labeled, our system provides labeling recommendations, which in turn makes learning image features more feasible by reducing the labeling time for single frames. We rely on the inertial sensors of a wristband to support the labeling of the video recordings. For this purpose, we apply template matching in the context of dynamic time warping to identify the time intervals of certain actions. To investigate the feasibility of our approach, we focus on a real-world scenario: we gathered a data set describing an order-picking scenario at a logistics company. In this context, we focus on the picking process, as the selection of the correct items can be prone to errors. Preliminary results show that we are able to identify 69% of the grabbing-motion time intervals.
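The paper does not publish code, but the DTW-based template matching it describes can be sketched concretely. The following is a minimal illustration, not the authors' implementation: it assumes a 1-D accelerometer-magnitude stream from the wristband and one hand-labeled example of the grabbing motion as the template; the function names (`dtw_distance`, `find_action_intervals`) and the `threshold` and `step` parameters are illustrative assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def find_action_intervals(signal, template, threshold, step=5):
    """Slide the labeled template over the unlabeled signal and return
    (start, end) sample intervals whose DTW distance to the template falls
    below the threshold -- candidate occurrences of the action."""
    w = len(template)
    hits = []
    for start in range(0, len(signal) - w + 1, step):
        window = signal[start:start + w]
        if dtw_distance(window, template) < threshold:
            hits.append((start, start + w))
    return hits

# Hypothetical usage: 'acc_xyz' is the wristband's 3-axis accelerometer
# recording, 'grab_template' one hand-labeled grabbing motion.
# acc_magnitude = np.linalg.norm(acc_xyz, axis=1)
# intervals = find_action_intervals(acc_magnitude, grab_template, threshold=15.0)
```

In practice, the signal would be normalized, overlapping hits merged into single intervals, and the distance threshold tuned on the small labeled seed set before the recommendations are shown to the annotator.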