{"title":"基于在线文本相关信息提取的自动语义模型研究","authors":"L. Krupp, Agnes Grünerbl, G. Bahle, P. Lukowicz","doi":"10.1109/SMARTCOMP.2019.00094","DOIUrl":null,"url":null,"abstract":"Monitoring of human activities is an essential capability of many smart systems. In recent years much progress has been achieved. One of the key remaining challenges is the availability of labeled training data, in particular taking into account the degree of variability in human activities. A possible solution is to leverage large scale online data repositories. This has been previously attempted with image and sound data, as both microphones and cameras are widely used sensing modalities. In this paper, we describe a first step towards the use of online, text-based activity descriptions to support general sensor-based activity recognition systems. The idea is to extract semantic information from online texts about the way complex activities are composed of simple ones that have to be performed (e.g. a manual for assembling a furniture piece) and use such a semantic description in conjunction with sensor based, statistical classifiers of basic actions to recognize the complex activities and compose them into semantic trees. Extraction of domain relevant information evaluated in 11 different text-based manuals from different domains reached an average recall of 77%, and precision of 88%. Actual structural error-rate in the construction of respective trees was around 1%.","PeriodicalId":253364,"journal":{"name":"2019 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Towards Automatic Semantic Models by Extraction of Relevant Information from Online Text\",\"authors\":\"L. Krupp, Agnes Grünerbl, G. Bahle, P. 
Lukowicz\",\"doi\":\"10.1109/SMARTCOMP.2019.00094\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Monitoring of human activities is an essential capability of many smart systems. In recent years much progress has been achieved. One of the key remaining challenges is the availability of labeled training data, in particular taking into account the degree of variability in human activities. A possible solution is to leverage large scale online data repositories. This has been previously attempted with image and sound data, as both microphones and cameras are widely used sensing modalities. In this paper, we describe a first step towards the use of online, text-based activity descriptions to support general sensor-based activity recognition systems. The idea is to extract semantic information from online texts about the way complex activities are composed of simple ones that have to be performed (e.g. a manual for assembling a furniture piece) and use such a semantic description in conjunction with sensor based, statistical classifiers of basic actions to recognize the complex activities and compose them into semantic trees. Extraction of domain relevant information evaluated in 11 different text-based manuals from different domains reached an average recall of 77%, and precision of 88%. 
Actual structural error-rate in the construction of respective trees was around 1%.\",\"PeriodicalId\":253364,\"journal\":{\"name\":\"2019 IEEE International Conference on Smart Computing (SMARTCOMP)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Conference on Smart Computing (SMARTCOMP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SMARTCOMP.2019.00094\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Smart Computing (SMARTCOMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMARTCOMP.2019.00094","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Towards Automatic Semantic Models by Extraction of Relevant Information from Online Text
Monitoring of human activities is an essential capability of many smart systems, and much progress has been achieved in recent years. One of the key remaining challenges is the availability of labeled training data, particularly given the degree of variability in human activities. A possible solution is to leverage large-scale online data repositories. This has previously been attempted with image and sound data, as both microphones and cameras are widely used sensing modalities. In this paper, we describe a first step towards the use of online, text-based activity descriptions to support general sensor-based activity recognition systems. The idea is to extract semantic information from online texts about how complex activities are composed of simpler ones that have to be performed (e.g. a manual for assembling a piece of furniture), and to use such a semantic description in conjunction with sensor-based, statistical classifiers of basic actions to recognize the complex activities and compose them into semantic trees. Extraction of domain-relevant information, evaluated on 11 text-based manuals from different domains, reached an average recall of 77% and a precision of 88%. The actual structural error rate in the construction of the respective trees was around 1%.
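To make the semantic-tree idea concrete, the following is a minimal sketch (not the authors' implementation; all class and activity names are hypothetical) of how a complex activity could be represented as a tree whose leaves are basic, sensor-detectable actions, with the leaf sequence serving as the pattern matched against classifier output:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch: a complex activity decomposed into sub-activities,
# whose leaves are basic actions that a statistical classifier could detect.
@dataclass
class ActivityNode:
    name: str
    children: List["ActivityNode"] = field(default_factory=list)

    def is_basic(self) -> bool:
        # Leaves correspond to basic actions recognizable from sensor data.
        return not self.children

    def basic_actions(self) -> List[str]:
        # Depth-first sequence of basic actions; matching this sequence
        # against classifier output would indicate the complex activity.
        if self.is_basic():
            return [self.name]
        return [a for child in self.children for a in child.basic_actions()]

# Toy example: a furniture-assembly manual flattened into a tree.
assemble = ActivityNode("assemble shelf", [
    ActivityNode("attach side panels", [
        ActivityNode("insert dowel"),
        ActivityNode("tighten screw"),
    ]),
    ActivityNode("mount back panel", [
        ActivityNode("align panel"),
        ActivityNode("hammer nail"),
    ]),
])

print(assemble.basic_actions())
# ['insert dowel', 'tighten screw', 'align panel', 'hammer nail']
```

The design choice here, a plain recursive tree, mirrors the paper's framing: the text-extraction step would build the tree, while the sensor-based classifiers only ever need to recognize the leaf actions.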