{"title":"Towards Systematic Benchmarking of Activity Recognition Algorithms","authors":"T. Sztyler, Christian Meilicke, H. Stuckenschmidt","doi":"10.1109/PERCOMW.2018.8480409","DOIUrl":null,"url":null,"abstract":"In this paper we propose a benchmarking frame-work for evaluating activity recognition methods. We use an ontology for describing activity recognition datasets on the meta-level and propose a fine-grained annotation scheme for activity recognition datasets. Given a method that implements a defined interface, an evaluation client can be used to automatically run experiments on annotated datasets. Our framework helps to find relevant datasets and makes results reproducible by fixing concrete experimental settings. We show how to use the framework and report about a preliminary evaluation experiment.","PeriodicalId":190096,"journal":{"name":"2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PERCOMW.2018.8480409","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper we propose a benchmarking framework for evaluating activity recognition methods. We use an ontology to describe activity recognition datasets at the meta-level and propose a fine-grained annotation scheme for such datasets. Given a method that implements a defined interface, an evaluation client can automatically run experiments on the annotated datasets. Our framework helps to find relevant datasets and makes results reproducible by fixing concrete experimental settings. We show how to use the framework and report on a preliminary evaluation experiment.
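The abstract's mention of "a method that implements a defined interface" suggests a plug-in design: any recognition method exposing the interface can be evaluated by the client under a fixed experimental setting. The Python sketch below illustrates what such an interface and evaluation client might look like. It is only an illustration under assumed names; ActivityRecognizer, EvaluationClient, and run are hypothetical and do not reflect the paper's actual API.

```python
# A minimal sketch of a plug-in interface for activity recognition methods
# and an evaluation client that runs a fixed train/test experiment.
# All names here are hypothetical, not the framework's real API.

from abc import ABC, abstractmethod
from typing import List, Sequence, Tuple

Window = Sequence[float]          # one windowed sensor reading (e.g. features)
Split = Tuple[Sequence[Window], Sequence[str]]  # (windows, activity labels)


class ActivityRecognizer(ABC):
    """Hypothetical interface an activity recognition method would implement."""

    @abstractmethod
    def train(self, windows: Sequence[Window], labels: Sequence[str]) -> None:
        """Fit the method on labelled sensor windows from an annotated dataset."""

    @abstractmethod
    def predict(self, windows: Sequence[Window]) -> List[str]:
        """Return one predicted activity label per sensor window."""


class EvaluationClient:
    """Hypothetical client that fixes the experimental setting (the train/test
    split) so that every plugged-in method is evaluated reproducibly."""

    def __init__(self, train_split: Split, test_split: Split) -> None:
        self.train_split = train_split  # training fold of the annotated dataset
        self.test_split = test_split    # held-out fold used for scoring

    def run(self, method: ActivityRecognizer) -> float:
        """Train the method, predict on the held-out fold, and return accuracy."""
        train_x, train_y = self.train_split
        test_x, test_y = self.test_split
        method.train(train_x, train_y)
        predictions = method.predict(test_x)
        correct = sum(p == t for p, t in zip(predictions, test_y))
        return correct / len(test_y)
```

Because the client, not the method, owns the split and the scoring, two methods evaluated through the same client instance see identical data and metrics, which is what makes the reported results comparable and reproducible.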