Tell me what I see: recognize RFID tagged objects in augmented reality systems
Lei Xie, Jianqiang Sun, Qingliang Cai, Chuyu Wang, Jie Wu, Sanglu Lu
DOI: 10.1145/2971648.2971661
Published in: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 12, 2016
Citations: 46
Abstract
Nowadays, people increasingly rely on augmented reality (AR) systems to obtain an augmented view of a real-world environment. With the help of advanced AR technology (e.g., object recognition), users can effectively distinguish multiple objects of different types. However, these techniques offer only limited degrees of distinction among objects and cannot provide more inherent information about them. In this paper, we leverage RFID technology to further label different objects with RFID tags. We attach additional RFID antennas to a COTS depth camera and propose a continuous scanning-based scheme: the system continuously rotates and samples both the depth of field and the RF-signals from the tagged objects. By pairing the tags with the objects according to the correlations between the depth of field and the RF-signals, we can accurately identify and distinguish multiple tagged objects, realizing the vision of "tell me what I see" in an augmented reality system. For example, when facing multiple unknown people wearing RFID-tagged badges at public events, our system can identify these people and further display the inherent information stored in their RFID tags, such as their names, jobs, and titles. We have implemented a prototype system to evaluate the actual performance. The experimental results show that our solution achieves an average match ratio of 91% in distinguishing up to dozens of tagged objects at a high deployment density.
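To make the correlation-based pairing idea concrete, the sketch below illustrates one plausible way to match per-tag RF-signal profiles against per-object depth profiles collected over the same rotation sweep. This is not the paper's actual algorithm; the function name `pair_tags_to_objects`, the use of Pearson correlation, and the Hungarian assignment step are all illustrative assumptions.

```python
# Hypothetical sketch of correlation-based tag-object pairing. It assumes each
# tag yields an RF-signal series (e.g., RSSI sampled per rotation angle) and
# each camera-detected object yields a depth series over the same angles.
# This is an illustration, not the authors' method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_tags_to_objects(rf_series, depth_series):
    """rf_series: (num_tags, num_samples); depth_series: (num_objects, num_samples).
    Returns a list of (tag_index, object_index) pairs."""
    num_tags = rf_series.shape[0]
    num_objects = depth_series.shape[0]
    corr = np.zeros((num_tags, num_objects))
    for i in range(num_tags):
        for j in range(num_objects):
            # Pearson correlation between the tag's RF profile and the
            # object's depth profile across the scan.
            corr[i, j] = np.corrcoef(rf_series[i], depth_series[j])[0, 1]
    # Maximize total correlation by minimizing its negative (Hungarian assignment).
    rows, cols = linear_sum_assignment(-corr)
    return list(zip(rows, cols))

# Toy usage with synthetic data: 3 tags, 3 objects, 50 samples per rotation sweep.
rng = np.random.default_rng(0)
rf = rng.normal(size=(3, 50))
depth = rf + 0.1 * rng.normal(size=(3, 50))  # correlated with rf by construction
print(pair_tags_to_objects(rf, depth))
```

A one-to-one assignment rather than greedy nearest-match is used here because, with dozens of tags at high deployment density, several objects can have similar profiles and a globally optimal matching is less likely to produce duplicate pairings.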