Person Re-ID Testbed with Multi-Modal Sensors

Guangliang Zhao, Guy Ben-Yosef, Jianwei Qiu, Yang Zhao, Prabhu Janakaraj, S. Boppana, A. R. Schnore
{"title":"带有多模态传感器的人员重新识别测试平台","authors":"Guangliang Zhao, Guy Ben-Yosef, Jianwei Qiu, Yang Zhao, Prabhu Janakaraj, S. Boppana, A. R. Schnore","doi":"10.1145/3485730.3494113","DOIUrl":null,"url":null,"abstract":"Person Re-ID is a challenging problem and is gaining more attention due to demands in security, intelligent system and other applications. Most person Re-ID works are vision-based, such as image, video, or broadly speaking, face recognition-based techniques. Recently, several multi-modal person Re-ID datasets were released, including RGB+IR, RGB+text, RGB+WiFi, which shows the potential of the multi-modal sensor-based person Re-ID approach. However, there are several common issues in public datasets, such as short time duration, lack of appearance change, and limited activities, resulting in un-robust models. For example, vision-based Re-ID models are sensitive to appearance change. In this work, a person Re-ID testbed with multi-modal sensors is created, allowing the collection of sensing modalities including RGB, IR, depth, WiFi, radar, and audio. This novel dataset will cover normal daily office activities with large time span over multi-seasons. Initial analytic results are obtained for evaluating different person Re-ID models, based on small datasets collected in this testbed.","PeriodicalId":356322,"journal":{"name":"Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Person Re-ID Testbed with Multi-Modal Sensors\",\"authors\":\"Guangliang Zhao, Guy Ben-Yosef, Jianwei Qiu, Yang Zhao, Prabhu Janakaraj, S. Boppana, A. R. Schnore\",\"doi\":\"10.1145/3485730.3494113\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Person Re-ID is a challenging problem and is gaining more attention due to demands in security, intelligent system and other applications. Most person Re-ID works are vision-based, such as image, video, or broadly speaking, face recognition-based techniques. Recently, several multi-modal person Re-ID datasets were released, including RGB+IR, RGB+text, RGB+WiFi, which shows the potential of the multi-modal sensor-based person Re-ID approach. However, there are several common issues in public datasets, such as short time duration, lack of appearance change, and limited activities, resulting in un-robust models. For example, vision-based Re-ID models are sensitive to appearance change. In this work, a person Re-ID testbed with multi-modal sensors is created, allowing the collection of sensing modalities including RGB, IR, depth, WiFi, radar, and audio. This novel dataset will cover normal daily office activities with large time span over multi-seasons. 
Initial analytic results are obtained for evaluating different person Re-ID models, based on small datasets collected in this testbed.\",\"PeriodicalId\":356322,\"journal\":{\"name\":\"Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3485730.3494113\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3485730.3494113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Person Re-ID is a challenging problem that is gaining attention due to demands in security, intelligent systems, and other applications. Most person Re-ID work is vision-based, relying on image, video, or, more broadly, face-recognition techniques. Recently, several multi-modal person Re-ID datasets have been released, including RGB+IR, RGB+text, and RGB+WiFi, which demonstrates the potential of multi-modal sensor-based person Re-ID. However, public datasets share several common limitations, such as short time duration, little appearance change, and a limited range of activities, which lead to models that are not robust; for example, vision-based Re-ID models are sensitive to appearance change. In this work, a person Re-ID testbed with multi-modal sensors is created, allowing the collection of sensing modalities including RGB, IR, depth, WiFi, radar, and audio. This novel dataset will cover normal daily office activities over a large time span across multiple seasons. Initial analytic results are obtained for evaluating different person Re-ID models, based on small datasets collected in this testbed.
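
The abstract mentions evaluating different person Re-ID models on small datasets collected in the testbed but does not describe the evaluation procedure. As context only, the sketch below shows a standard Re-ID evaluation protocol (rank-1 accuracy and mean Average Precision computed from query and gallery embeddings). Every function name, variable, and the random placeholder data are illustrative assumptions, not the authors' implementation or data.

```python
# Minimal sketch of a common person Re-ID evaluation protocol:
# rank-1 accuracy and mAP over query/gallery embedding splits.
# Illustrative only; not the paper's code.
import numpy as np

def evaluate_reid(query_feats, query_ids, gallery_feats, gallery_ids):
    """query_feats: (Nq, D), gallery_feats: (Ng, D); ids are 1-D int arrays."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T  # (Nq, Ng) similarity matrix

    rank1_hits, average_precisions = 0, []
    for i in range(sim.shape[0]):
        order = np.argsort(-sim[i])                     # gallery indices, best match first
        matches = gallery_ids[order] == query_ids[i]    # boolean array over ranked gallery
        if matches[0]:
            rank1_hits += 1
        if matches.any():
            # Precision evaluated at each rank where a true match appears.
            hit_ranks = np.where(matches)[0]
            precision_at_hits = (np.arange(len(hit_ranks)) + 1) / (hit_ranks + 1)
            average_precisions.append(precision_at_hits.mean())

    rank1 = rank1_hits / sim.shape[0]
    mean_ap = float(np.mean(average_precisions)) if average_precisions else 0.0
    return rank1, mean_ap

# Usage with random placeholder embeddings and identities:
rng = np.random.default_rng(0)
qf, gf = rng.normal(size=(5, 128)), rng.normal(size=(20, 128))
qid, gid = rng.integers(0, 4, 5), rng.integers(0, 4, 20)
print(evaluate_reid(qf, qid, gf, gid))
```

In practice, protocols usually also exclude gallery samples from the same camera or capture session as the query; for a multi-modal testbed like the one described here, the embedding matrices would simply be whatever features (e.g., fused RGB+WiFi representations) a given model produces.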