Atlas - Annotation tool using partially supervised learning and multi-view co-learning in human-computer-interaction scenarios
S. Meudt, Lutz Bigalke, F. Schwenker
2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA), July 2, 2012
DOI: 10.1109/ISSPA.2012.6310495
In this paper we present ATLAS, a new graphical tool for the annotation of multi-modal data streams. Although ATLAS has been developed for databases collected in human-computer interaction (HCI) scenarios, it is applicable to multimodal time series in general settings. In our HCI scenario, besides multi-channel audio and video inputs, various bio-physiological data have been recorded, e.g. complex multi-variate signals such as ECG, EEG, and EMG, as well as simple uni-variate signals such as skin conductivity, respiration, and blood volume pulse. All these different types of data can be processed with ATLAS. In addition to raw data, intermediate processing results, such as extracted features, and even (probabilistic or crisp) outputs of pre-trained classifier modules can be displayed. Furthermore, annotation and transcription tools have been implemented. ATLAS's basic structure is briefly described. Beyond these basic annotation features, active learning (active data selection) approaches have been integrated into the overall system. A Support Vector Machine (SVM) with probabilistic outputs is the current algorithm used to select confidently classified data. Confident classification results from the SVM classifier help the human expert investigate unlabeled parts of the data.
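The confidence-based data selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an SVM with Platt-scaled probabilistic outputs (as the abstract mentions) and a hypothetical confidence threshold; the feature data and the threshold value are invented for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for labeled feature vectors (e.g. features extracted from
# audio, video, or bio-physiological channels) with two annotation classes.
X_labeled = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
y_labeled = np.array([0] * 20 + [1] * 20)

# Unlabeled portion of the recording that the expert has not annotated yet.
X_unlabeled = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(3, 1, (5, 4))])

# SVM with probabilistic outputs (Platt scaling via probability=True).
clf = SVC(probability=True, random_state=0).fit(X_labeled, y_labeled)
proba = clf.predict_proba(X_unlabeled)

# Keep only predictions whose top class probability clears a confidence
# threshold; these would be surfaced to the annotator as label suggestions.
# The threshold value is an assumption, not taken from the paper.
CONFIDENCE = 0.9
confident = np.max(proba, axis=1) >= CONFIDENCE
suggested_labels = clf.classes_[np.argmax(proba[confident], axis=1)]
print(f"{confident.sum()} of {len(X_unlabeled)} unlabeled samples suggested")
```

In an annotation workflow of this kind, the human expert would then confirm or correct the suggested labels, and the confirmed samples could be fed back into the labeled pool to retrain the classifier.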