IDIoT: Multimodal Framework for Ubiquitous Identification and Assignment of Human-carried Wearable Devices

Adeola Bannis, Shijia Pan, Carlos Ruiz, John Shen, Hae Young Noh, Pei Zhang
{"title":"白痴:人类携带的可穿戴设备的普遍识别和分配的多模态框架","authors":"Adeola Bannis, Shijia Pan, Carlos Ruiz, John Shen, Hae Young Noh, Pei Zhang","doi":"10.1145/3579832","DOIUrl":null,"url":null,"abstract":"IoT (Internet of Things) devices, such as network-enabled wearables, are carried by increasingly more people throughout daily life. Information from multiple devices can be aggregated to gain insights into a person’s behavior or status. For example, an elderly care facility could monitor patients for falls by combining fitness bracelet data with video of the entire class. For this aggregated data to be useful to each person, we need a multi-modality association of the devices’ physical ID (i.e., location, the user holding it, visual appearance) with a virtual ID (e.g., IP address/available services). Existing approaches for multi-modality association often require intentional interaction or direct line-of-sight to the device, which is infeasible for a large number of users or when the device is obscured by clothing. We present IDIoT, a calibration-free passive sensing approach that fuses motion sensor information with camera footage of an area to estimate the body location of motion sensors carried by a user. We characterize results across three baselines to highlight how different fusing methodology results better than earlier IMU-vision fusion algorithms. From this characterization, we determine IDIoT is more robust to errors such as missing frames or miscalibration that frequently occur in IMU-vision matching systems.","PeriodicalId":29764,"journal":{"name":"ACM Transactions on Internet of Things","volume":"15 1","pages":"1 - 25"},"PeriodicalIF":3.5000,"publicationDate":"2023-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"IDIoT: Multimodal Framework for Ubiquitous Identification and Assignment of Human-carried Wearable Devices\",\"authors\":\"Adeola Bannis, Shijia Pan, Carlos Ruiz, John Shen, Hae Young Noh, Pei Zhang\",\"doi\":\"10.1145/3579832\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"IoT (Internet of Things) devices, such as network-enabled wearables, are carried by increasingly more people throughout daily life. Information from multiple devices can be aggregated to gain insights into a person’s behavior or status. For example, an elderly care facility could monitor patients for falls by combining fitness bracelet data with video of the entire class. For this aggregated data to be useful to each person, we need a multi-modality association of the devices’ physical ID (i.e., location, the user holding it, visual appearance) with a virtual ID (e.g., IP address/available services). Existing approaches for multi-modality association often require intentional interaction or direct line-of-sight to the device, which is infeasible for a large number of users or when the device is obscured by clothing. We present IDIoT, a calibration-free passive sensing approach that fuses motion sensor information with camera footage of an area to estimate the body location of motion sensors carried by a user. We characterize results across three baselines to highlight how different fusing methodology results better than earlier IMU-vision fusion algorithms. 
From this characterization, we determine IDIoT is more robust to errors such as missing frames or miscalibration that frequently occur in IMU-vision matching systems.\",\"PeriodicalId\":29764,\"journal\":{\"name\":\"ACM Transactions on Internet of Things\",\"volume\":\"15 1\",\"pages\":\"1 - 25\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2023-01-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Internet of Things\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3579832\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Internet of Things","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3579832","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
IoT (Internet of Things) devices, such as network-enabled wearables, are carried by an increasing number of people throughout daily life. Information from multiple devices can be aggregated to gain insights into a person's behavior or status. For example, an elderly care facility could monitor patients for falls by combining fitness bracelet data with video of the entire class. For this aggregated data to be useful to each person, we need a multi-modality association of each device's physical ID (i.e., its location, the user holding it, its visual appearance) with a virtual ID (e.g., IP address/available services). Existing approaches for multi-modality association often require intentional interaction or a direct line of sight to the device, which is infeasible for a large number of users or when the device is obscured by clothing. We present IDIoT, a calibration-free passive sensing approach that fuses motion sensor information with camera footage of an area to estimate the body location of motion sensors carried by a user. We characterize results against three baselines to highlight how IDIoT's fusion methodology performs better than earlier IMU-vision fusion algorithms. From this characterization, we determine that IDIoT is more robust to errors, such as missing frames or miscalibration, that frequently occur in IMU-vision matching systems.
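The abstract does not spell out how the IMU-vision fusion is performed; the following is a minimal sketch of one plausible formulation, not the authors' implementation. It assumes each wearable's accelerometer stream has already been resampled to the camera frame rate, and it matches devices to camera-tracked people by correlating acceleration magnitudes and solving the resulting assignment problem with the Hungarian algorithm. All function names and input shapes here are hypothetical.

```python
# Sketch: match IMU streams to camera-tracked people by comparing
# motion signals, then assign devices to people one-to-one.

import numpy as np
from scipy.optimize import linear_sum_assignment

def accel_magnitude_from_track(track_xy: np.ndarray, fps: float) -> np.ndarray:
    """Approximate a tracked person's acceleration magnitude from pixel positions.

    track_xy: (T, 2) array of pixel positions for one person.
    Returns a (T-2,) array of acceleration magnitudes (pixels/s^2).
    """
    vel = np.diff(track_xy, axis=0) * fps   # (T-1, 2) finite-difference velocity
    acc = np.diff(vel, axis=0) * fps        # (T-2, 2) finite-difference acceleration
    return np.linalg.norm(acc, axis=1)

def normalized_corr(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation of two equal-length 1-D signals."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def match_imus_to_tracks(imu_acc_mags, tracks_xy, fps):
    """Assign each IMU stream to one camera track via Hungarian matching.

    imu_acc_mags: list of 1-D arrays, acceleration magnitude per device
                  (resampled to the camera frame rate).
    tracks_xy:    list of (T, 2) arrays, one per tracked person.
    Returns a list of (imu_index, track_index) pairs.
    """
    cost = np.zeros((len(imu_acc_mags), len(tracks_xy)))
    for i, imu in enumerate(imu_acc_mags):
        for j, track in enumerate(tracks_xy):
            vis = accel_magnitude_from_track(track, fps)
            n = min(len(imu), len(vis))     # truncate to a common length
            # Higher correlation between signals -> lower assignment cost.
            cost[i, j] = -normalized_corr(imu[:n], vis[:n])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```

This sketch only resolves which person carries which device; estimating where on the body a sensor sits, as the paper describes, would need finer-grained visual motion (e.g., per-joint keypoint tracks) compared against each candidate body location.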