J. Yamashita, Yoshiaki Takimoto, Hidetaka Koya, Haruo Oishi, T. Kumada
{"title":"使用头部和眼睛运动的深度无监督活动可视化","authors":"J. Yamashita, Yoshiaki Takimoto, Hidetaka Koya, Haruo Oishi, T. Kumada","doi":"10.1145/3379336.3381503","DOIUrl":null,"url":null,"abstract":"We propose a method of visualizing user activities based on user's head and eye movements. Since we use an unobtrusive eyewear sensor, the measurement scene is unconstrained. In addition, due to the unsupervised end-to-end deep algorithm, users can discover unanticipated activities based on the exploratory analysis of low-dimensional representation of sensor data. We also suggest the novel regularization that makes the representation person invariant.","PeriodicalId":335081,"journal":{"name":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Unsupervised Activity Visualization using Head and Eye Movements\",\"authors\":\"J. Yamashita, Yoshiaki Takimoto, Hidetaka Koya, Haruo Oishi, T. Kumada\",\"doi\":\"10.1145/3379336.3381503\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a method of visualizing user activities based on user's head and eye movements. Since we use an unobtrusive eyewear sensor, the measurement scene is unconstrained. In addition, due to the unsupervised end-to-end deep algorithm, users can discover unanticipated activities based on the exploratory analysis of low-dimensional representation of sensor data. 
We also suggest the novel regularization that makes the representation person invariant.\",\"PeriodicalId\":335081,\"journal\":{\"name\":\"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-03-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3379336.3381503\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 25th International Conference on Intelligent User Interfaces Companion","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3379336.3381503","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Unsupervised Activity Visualization using Head and Eye Movements
We propose a method for visualizing user activities based on the user's head and eye movements. Because we use an unobtrusive eyewear sensor, the measurement setting is unconstrained. Moreover, because the method is an unsupervised end-to-end deep learning algorithm, users can discover unanticipated activities through exploratory analysis of a low-dimensional representation of the sensor data. We also propose a novel regularization that makes the representation person-invariant.
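The abstract does not specify how the person-invariance regularization is computed. As a hedged illustration of the general idea (not the paper's actual term), one simple approach is a moment-matching penalty that pulls each person's mean embedding toward the global mean, so the low-dimensional representation cannot separate people. The function name and formulation below are hypothetical:

```python
import numpy as np

def person_invariance_penalty(z, person_ids):
    """Illustrative person-invariance regularizer (hypothetical, not from the paper).

    z          : (n_samples, dim) array of low-dimensional embeddings.
    person_ids : length-n_samples array of person labels.

    Returns the mean squared distance between each person's mean embedding
    and the global mean embedding; 0 when per-person means coincide.
    """
    z = np.asarray(z, dtype=float)
    person_ids = np.asarray(person_ids)
    global_mean = z.mean(axis=0)
    persons = np.unique(person_ids)
    penalty = 0.0
    for p in persons:
        person_mean = z[person_ids == p].mean(axis=0)
        penalty += np.sum((person_mean - global_mean) ** 2)
    return penalty / len(persons)

# Embeddings that cluster by person are penalized;
# identical per-person distributions give zero penalty.
z_separated = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]])
z_mixed = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]])
ids = np.array([0, 0, 1, 1])
print(person_invariance_penalty(z_separated, ids))  # 0.5
print(person_invariance_penalty(z_mixed, ids))      # 0.0
```

In training, such a term would be added to the reconstruction loss of the unsupervised model with a weighting coefficient, trading off fidelity against person invariance.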