Poster: Cross Labelling and Learning Unknown Activities Among Multimodal Sensing Data
Lan Zhang, Daren Zheng, Zhengtao Wu, Mengjing Liu, Mu Yuan, Feng Han, Xiangyang Li
The 25th Annual International Conference on Mobile Computing and Networking (MobiCom '19), August 5, 2019. DOI: 10.1145/3300061.3343407
One of the major challenges in fully enjoying the power of machine learning is the need for high-quality labelled data. To tap into the gold mine of data generated by IoT devices, unprecedented in both volume and value, we discover and leverage the hidden connections among the multimodal data collected by various sensing devices. Data of different modalities can complement and learn from each other, but it is challenging to fuse multimodal data without knowing what they perceive (and thus the correct labels). In this work, we propose MultiSense, a paradigm for automatically mining the underlying perception, cross-labelling the data of each modality, and then improving the learning models over the whole set of multimodal data. We design innovative solutions for segmenting, aligning, and fusing multimodal data from different sensors. We implement our framework and conduct comprehensive evaluations on a rich set of data. Our results demonstrate that MultiSense significantly improves data usability and the power of the learning models.
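To make the cross-labelling idea concrete, the following is a minimal sketch, not the authors' implementation: the Segment type, the cross_label function, and the 0.5 overlap threshold are all illustrative assumptions. It shows one plausible reading of the idea, where segments from a modality with known labels transfer those labels to temporally overlapping, unlabelled segments of another modality recorded on a shared clock.

    # Illustrative sketch (assumed, not from the paper): cross-labelling by
    # temporal alignment. Segments from a labelled modality (e.g., a camera)
    # transfer activity labels to overlapping segments of an unlabelled
    # modality (e.g., an accelerometer).

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Segment:
        start: float          # segment start time (seconds, shared clock)
        end: float            # segment end time
        label: Optional[str]  # activity label, None if unknown

    def overlap(a: Segment, b: Segment) -> float:
        """Length of the time interval shared by two segments."""
        return max(0.0, min(a.end, b.end) - max(a.start, b.start))

    def cross_label(labelled: List[Segment], unlabelled: List[Segment],
                    min_overlap_ratio: float = 0.5) -> List[Segment]:
        """Copy each labelled segment's label to any unlabelled segment it
        covers for at least min_overlap_ratio of that segment's duration."""
        for seg in unlabelled:
            best, best_ratio = None, 0.0
            for ref in labelled:
                ratio = overlap(seg, ref) / (seg.end - seg.start)
                if ratio > best_ratio:
                    best, best_ratio = ref, ratio
            if best is not None and best_ratio >= min_overlap_ratio:
                seg.label = best.label
        return unlabelled

    # Example: camera segments label accelerometer segments recorded in parallel.
    camera = [Segment(0.0, 5.0, "walking"), Segment(5.0, 9.0, "sitting")]
    accel = [Segment(0.5, 4.5, None), Segment(5.2, 8.8, None)]
    print([s.label for s in cross_label(camera, accel)])  # ['walking', 'sitting']

The paper's actual segmentation, alignment, and fusion algorithms are not specified in this abstract; this sketch only illustrates the label-propagation step that the cross-labelling paradigm implies.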