Lost and Found!: associating target persons in camera surveillance footage with smartphone identifiers

Hansi Liu, Abrar Alali, Mohamed Ibrahim, Hongyu Li, M. Gruteser, Shubham Jain, Kristin J. Dana, A. Ashok, Bin Cheng, Hongsheng Lu

Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, June 24, 2021. DOI: 10.1145/3458864.3466904
We demonstrate an application that finds target persons in surveillance video. Each visually detected participant is tagged with a smartphone ID, and the target person matching the query ID is highlighted. This work is motivated by the fact that establishing associations between subjects observed in camera images and messages transmitted from their wireless devices enables fast and reliable tagging. This is particularly helpful when target pedestrians need to be found in public surveillance footage without relying on facial recognition. The underlying system uses a multi-modal approach that leverages WiFi Fine Timing Measurements (FTM) and inertial sensor (IMU) data to associate each visually detected individual with a corresponding smartphone identifier. These smartphone measurements are combined strategically with RGB-D information from the camera to learn affinity matrices using a multi-modal deep learning network.
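To make the association step concrete, the sketch below shows one way a learned affinity matrix between camera detections and smartphone IDs could be turned into one-to-one matches using the Hungarian algorithm. This is a minimal illustration only: the affinity values, the `associate` helper, and the matching threshold are assumptions for demonstration, not the authors' implementation (in the paper, the affinities come from the multi-modal deep learning network fed with FTM, IMU, and RGB-D features).

```python
# Minimal sketch (illustrative, not the paper's code): assign each visually
# detected person (row) to at most one smartphone ID (column) given a learned
# affinity matrix, where higher values mean a more likely match.
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(affinity: np.ndarray, min_affinity: float = 0.5):
    """Return (detection_index, phone_index) pairs for confident matches."""
    # linear_sum_assignment minimizes total cost, so negate the affinities
    # to maximize total affinity instead.
    rows, cols = linear_sum_assignment(-affinity)
    # Keep only confident pairs; low-affinity detections stay untagged
    # (e.g., bystanders whose phones are not transmitting).
    return [(i, j) for i, j in zip(rows, cols) if affinity[i, j] >= min_affinity]


# Example with made-up numbers: 3 detected pedestrians, 2 known smartphone IDs.
affinity = np.array([
    [0.92, 0.10],   # detection 0 strongly matches phone 0
    [0.15, 0.81],   # detection 1 strongly matches phone 1
    [0.20, 0.30],   # detection 2: no confident phone match
])
print(associate(affinity))   # [(0, 0), (1, 1)]
```

Once such matches are available, highlighting the target person reduces to looking up which detection is paired with the query smartphone ID.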