Latest Publications: Proceedings of the 8th ACM/IEEE Conference on Internet of Things Design and Implementation

One Ring to Rule Them All: An Open Source Smartring Platform for Finger Motion Analytics and Healthcare Applications
Hao Zhou, Taiting Lu, Yilin Liu, Shijia Zhang, Runze Liu, Mahanth K. Gowda
DOI: 10.1145/3576842.3582382 · Published: 2023-05-09
Abstract: This paper presents OmniRing, an open-source smartring platform with IMU and PPG sensors for activity tracking and health analytics applications. Smartring platforms are on the rise because of their comfortable fit, with the market size expected to reach $92 million soon. Nevertheless, most existing platforms are either commercial and proprietary without details of software/hardware, or use suboptimal PCB designs resulting in bulky form factors inconvenient for daily wear. Towards bridging the gap, OmniRing presents an extensible smartring design with a miniature form factor, longer battery life, wireless communication, and water resistance so that users can wear it all the time. Towards this end, OmniRing exploits opportunities in SoC design and carefully integrates the sensing units with a microcontroller and BLE modules. The electronic components are integrated on both sides of a flexible PCB that is bent into the shape of a ring and enclosed in a flexible, waterproof case for smooth skin contact. The overall cost is under $25, with a weight of 2.5 g and up to a week of battery life. Extensive usability surveys validate the comfort levels. To validate the sensing capabilities, we enable an application in 3D finger motion tracking. By extracting synthetic training data from public videos, coupled with data augmentation to minimize the overhead of training data generation for a new platform, OmniRing designs a transformer-based model that exploits correlations across fingers and time to track 3D finger motion with an accuracy of 6.57 mm. We also validate the use of PPG data from OmniRing for heart rate monitoring. We believe the platform can enable exciting applications in fitness tracking, the metaverse, sports, and healthcare.
Citations: 1
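The abstract mentions data augmentation to stretch limited training data for a new IMU platform. As a hypothetical illustration (not OmniRing's actual pipeline, which extracts synthetic data from public videos), one standard IMU augmentation is to apply a random 3D rotation to the sensor frame:

```python
import numpy as np

# Hypothetical sketch of a common IMU augmentation: rotate the
# accelerometer and gyroscope triads by the same random rotation,
# simulating a different ring orientation on the finger.
def random_rotation(rng):
    """Random rotation matrix (det = +1) via QR decomposition."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]  # flip one axis to avoid a reflection
    return q

def augment_imu(seq, rng):
    """Rotate a (T, 6) sequence of [accel_xyz, gyro_xyz] samples."""
    rot = random_rotation(rng)
    out = seq.copy()
    out[:, :3] = seq[:, :3] @ rot.T  # rotate accelerometer axes
    out[:, 3:] = seq[:, 3:] @ rot.T  # rotate gyroscope axes
    return out

rng = np.random.default_rng(42)
seq = rng.normal(size=(100, 6))  # a dummy 100-sample IMU sequence
aug = augment_imu(seq, rng)
```

Because rotation preserves vector norms, the augmented data keeps the physical magnitude of each accelerometer and gyroscope sample while varying its direction.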
Poster Abstract: Learning-based Sensor Scheduling for Event Classification on Embedded Edge Devices
Abdulrahman Bukhari, Hyoseung Kim
DOI: 10.1145/3576842.3589176 · Published: 2023-05-09
Abstract: Incremental learning on embedded edge devices is feasible nowadays due to the increasing computational power of these devices and the reduction techniques applied to simplify the model. However, edge devices still require significant time to update the learning model, and such time is hard to obtain alongside other tasks, such as sensor data pulling, data preprocessing, and classification. In order to secure time for incremental learning and to reduce energy consumption, we need to schedule sensing activities without missing any events in the environment. In this paper, we propose a reinforcement learning-based sensor scheduler that dynamically determines the sensing interval for each classification moment by learning the patterns of event classes. The initial results are promising compared to the existing scheduling approach.
Citations: 1
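The scheduler described above learns a sensing interval from event patterns. As a minimal sketch of the general idea (the interval set, state encoding, and reward are invented here, not taken from the poster), a tabular Q-learning agent could pick the next interval based on the last observed event class:

```python
import random

# Illustrative sketch, not the authors' implementation: a Q-learning
# agent chooses the next sensing interval from a small discrete set,
# rewarding long sleeps that save energy and penalizing missed events.
INTERVALS = [1, 5, 10, 30]          # candidate sensing intervals (s)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

class SensorScheduler:
    def __init__(self, n_classes):
        # state = last observed event class; action = interval index
        self.q = [[0.0] * len(INTERVALS) for _ in range(n_classes)]

    def choose(self, state):
        if random.random() < EPSILON:           # explore occasionally
            return random.randrange(len(INTERVALS))
        row = self.q[state]                     # otherwise exploit
        return row.index(max(row))

    def update(self, state, action, missed_event, next_state):
        # Longer intervals earn a small energy reward; misses cost a lot.
        reward = -10.0 if missed_event else INTERVALS[action] / 30.0
        best_next = max(self.q[next_state])
        self.q[state][action] += ALPHA * (
            reward + GAMMA * best_next - self.q[state][action])

sched = SensorScheduler(n_classes=3)
a = sched.choose(0)
sched.update(0, a, missed_event=False, next_state=1)
```

The key design point carried over from the abstract is that the interval is re-decided at each classification moment, so the agent can adapt to the observed event class rather than using a fixed duty cycle.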
Demo Abstract: In-Ear-Voice - Towards Milli-Watt Audio Enhancement With Bone-Conduction Microphones for In-Ear Sensing Platforms
Philipp Schilk, Niccolò Polvani, Andrea Ronco, M. Cernak, M. Magno
DOI: 10.1145/3576842.3589166 · Published: 2023-05-09
Abstract: This demonstration presents a custom-developed research platform for low-power wireless earbuds based on the cutting-edge Ambiq Apollo 4 Blue SoC, targeted at applications in in-ear sensing and on-the-edge data processing. The earbud shown is equipped with a novel, commercial MEMS bone-conduction microphone. Such microphones can record the wearer's speech with much greater isolation, enabling personalized voice activity detection and further audio enhancement applications. The device runs a specialized, TinyML-based voice activity detection algorithm, indicating the wearer's speech using an onboard LED. A second, identical earbud that attempts the same detection using a traditional air-conduction microphone is also shown, to underline the advantage the bone-conduction microphone provides. Overall, the platform achieves 2.64 mW average power consumption at 14 µJ per inference, reaching 43 h of battery life on a miniature 32 mAh Li-ion cell without duty cycling.
Citations: 0
MetaMorphosis: Task-oriented Privacy Cognizant Feature Generation for Multi-task Learning
Md. Adnan Arefeen, Zhouyu Li, M. Y. S. Uddin, Anupam Das
DOI: 10.1145/3576842.3582372 · Published: 2023-05-09
Abstract: With the growth of computer vision applications, deep learning and edge computing contribute to ensuring practical collaborative intelligence (CI) by distributing the workload among edge devices and the cloud. However, running separate single-task models on edge devices is inefficient in terms of the required computational resources and time. In this context, multi-task learning allows leveraging a single deep learning model for performing multiple tasks, such as semantic segmentation and depth estimation, on incoming video frames. This single processing pipeline generates common deep features that are shared among multi-task modules. However, in a collaborative intelligence scenario, generating common deep features has two major issues. First, the deep features may inadvertently expose input information to the downstream modules (violating input privacy). Second, the generated universal features expose more collective information than is intended for a certain task, so that features for one task can be utilized to perform another task (violating task privacy). This paper proposes a novel deep learning-based privacy-cognizant feature generation process called "MetaMorphosis" that limits inference capability to the specific tasks at hand. To achieve this, we propose a channel squeeze-excitation based feature metamorphosis module, Cross-SEC, to achieve distinct attention for all tasks, and a de-correlation loss function with differential privacy to train a deep learning model that produces distinct privacy-aware features as output for the respective tasks. With extensive experimentation on four datasets consisting of diverse images related to scene understanding and facial attributes, we show that MetaMorphosis outperforms recent adversarial learning and universal feature generation methods by guaranteeing privacy requirements in an efficient way for image and video analytics.
Citations: 0
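The abstract's de-correlation loss discourages one task's features from being usable for another task. As a rough sketch of one plausible such term (the paper's exact loss, model, and differential-privacy mechanism are not reproduced here), a penalty on the cross-correlation between two tasks' feature batches could look like:

```python
import numpy as np

# Hypothetical sketch in the spirit of a de-correlation loss: penalize
# linear predictability of task B's features from task A's features.
def decorrelation_penalty(feat_a, feat_b, eps=1e-8):
    """Mean squared cross-correlation between two (batch, dim) feature
    batches. Zero when the centered, normalized features are
    uncorrelated across the batch; larger when they share information.
    """
    a = feat_a - feat_a.mean(axis=0)
    b = feat_b - feat_b.mean(axis=0)
    a = a / (a.std(axis=0) + eps)
    b = b / (b.std(axis=0) + eps)
    cross = a.T @ b / feat_a.shape[0]   # (dim_a, dim_b) correlations
    return float(np.mean(cross ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 8))
y = rng.normal(size=(256, 8))           # independent of x
p_indep = decorrelation_penalty(x, y)   # near zero
p_self = decorrelation_penalty(x, x)    # large: x fully predicts x
```

Minimizing such a term during training pushes each task head toward features that carry little usable signal for the other tasks, which is the task-privacy goal the abstract describes.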
Poster Abstract: Camera-Assisted Training of Non-Vision Sensors for Anomaly Detection
Norah Albazzai, Omer F. Rana, Charith Perera
DOI: 10.1145/3576842.3589164 · Published: 2023-05-09
Abstract: Cameras are becoming pervasive and are used for image classification and object detection in various applications, including anomaly detection. However, cameras pose a privacy threat and require significant power resources. To address these issues, researchers have explored non-vision sensors, but pre-training them for anomaly detection is challenging because anomalies are difficult to define and vary significantly across indoor environments. Thus, we propose a new approach to training non-vision sensors using a tiny camera and a pre-trained MobileNetV2 model. Data from the non-vision sensors are labelled based on the image classification from the tiny camera, and an anomaly detection model is trained using these labelled data. A Random Forest model is used as the final model, achieving an accuracy of 95.58%.
Citations: 0
Acuity: Creating Realistic Digital Twins Through Multi-resolution Pointcloud Processing and Audiovisual Sensor Fusion
Jason Wu, Ziqi Wang, Ankur Sarker, M. Srivastava
DOI: 10.1145/3576842.3582363 · Published: 2023-05-09
Abstract: As augmented and virtual reality (AR/VR) technology matures, a method is desired to represent real-world persons visually and aurally in a virtual scene with high fidelity, to craft an immersive and realistic user experience. Current technologies leverage camera and depth sensors to render visual representations of subjects through avatars, and microphone arrays are employed to localize and separate high-quality subject audio through beamforming. However, challenges remain in both realms. In the visual domain, avatars can only map key features (e.g., pose, expression) to a predetermined model, rendering them incapable of capturing the subjects' full details. Alternatively, high-resolution point clouds can be utilized to represent human subjects. However, such three-dimensional data is computationally expensive to process. In the realm of audio, sound source separation requires prior knowledge of the subjects' locations. However, it may take unacceptably long for sound source localization algorithms to provide this knowledge, which can still be error-prone, especially with moving objects. These challenges make it difficult for AR systems to produce real-time, high-fidelity representations of human subjects for applications such as AR/VR conferencing that mandate negligible system latency. We present Acuity, a real-time system capable of creating high-fidelity representations of human subjects in a virtual scene both visually and aurally. Acuity isolates subjects from high-resolution input point clouds. It reduces the processing overhead by performing background subtraction at a coarse resolution, then applying the detected bounding boxes to fine-grained point clouds. Meanwhile, Acuity leverages an audiovisual sensor fusion approach to expedite sound source separation. The estimated object location in the visual domain guides the acoustic pipeline to isolate the subjects' voices without running sound source localization. Our results demonstrate that Acuity can isolate multiple subjects' high-quality point clouds with a maximum latency of 70 ms and average throughput of over 25 fps, while separating audio in less than 30 ms. We provide the source code of Acuity at: https://github.com/nesl/Acuity.
Citations: 1
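The coarse-to-fine trick in the abstract, i.e., background subtraction on a downsampled cloud followed by cropping the full-resolution cloud with the resulting bounding box, can be sketched as follows. This is an illustrative toy (voxel size, margin, and distance test are assumptions, not Acuity's code; the released implementation is at the GitHub link above):

```python
import numpy as np

# Toy coarse-to-fine subject isolation: subtract background on a voxel-
# downsampled cloud, then crop the fine cloud with the foreground bbox.
def voxel_downsample(points, voxel=0.2):
    """Keep one representative point per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx]

def crop_subject(fine_cloud, background_coarse, voxel=0.2, margin=0.25):
    coarse = voxel_downsample(fine_cloud, voxel)
    # Coarse points far from every background point are foreground.
    d = np.linalg.norm(coarse[:, None, :] - background_coarse[None, :, :],
                       axis=-1).min(axis=1)
    fg = coarse[d > voxel]
    if len(fg) == 0:
        return fine_cloud[:0]               # nothing detected
    lo, hi = fg.min(axis=0) - margin, fg.max(axis=0) + margin
    mask = np.all((fine_cloud >= lo) & (fine_cloud <= hi), axis=1)
    return fine_cloud[mask]                 # full-resolution subject

rng = np.random.default_rng(1)
background = rng.uniform(0, 5, size=(2000, 3))         # static scene
subject = rng.normal([10, 10, 10], 0.3, size=(500, 3))  # person, offset
scene = np.vstack([background, subject])
bg_coarse = voxel_downsample(background)
cropped = crop_subject(scene, bg_coarse)
```

The expensive pairwise distance test runs only on the small coarse cloud; the full-resolution cloud is touched only by a cheap axis-aligned box test, which is the source of the latency savings the paper reports.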
An IoT Dashboard Monitoring First Responders' Health and Environmental Data during HAZMAT Emergencies
Ann L. Fruhling, A. Yoder, Soundarya Jonnalagadda
DOI: 10.1145/3576842.3589156 · Published: 2023-05-09
Abstract: The goal of the REaCH (Real Time Emergency Communication System for HAZMAT Incidents) project was to develop a real-time interactive dashboard that captures and displays first responders' health data and environmental exposure information to an incident commander during a HAZMAT (Hazardous Materials) incident. The REaCH application monitors individual health parameters and helps Incident Commanders evaluate whether an individual needs to be removed from the scene because their health status is compromised. The REaCH application utilizes the latest IoT technology, integrating multiple IoT sensors that capture human and environmental data over a real-time communication network. Our demo presents the key features and design practices of our IoT dashboard. Practitioners and researchers will find these insights useful when developing similar IoT dashboard applications for emergency management.
Citations: 0
In-Ear-Voice: Towards Milli-Watt Audio Enhancement With Bone-Conduction Microphones for In-Ear Sensing Platforms
Philipp Schilk, Niccolò Polvani, Andrea Ronco, M. Cernak, M. Magno
DOI: 10.1145/3576842.3582365 · Published: 2023-05-09
Abstract: The recent ubiquitous adoption of remote conferencing has been accompanied by omnipresent frustration with distorted or otherwise unclear voice communication. Audio enhancement can compensate for low-quality input signals from, for example, small true wireless earbuds, by applying noise suppression techniques. Such processing relies on voice activity detection (VAD) with low latency and the added capability of discriminating the wearer's voice from others - a task of significant computational complexity. The tight energy budget of devices as small as modern earphones, however, requires any system attempting to tackle this problem to do so with minimal power and processing overhead, while not relying on speaker-specific voice samples and training due to usability concerns. This paper presents the design and implementation of a custom research platform for low-power wireless earbuds based on novel, commercial MEMS bone-conduction microphones. Such microphones can record the wearer's speech with much greater isolation, enabling personalized voice activity detection and further audio enhancement applications. Furthermore, the paper evaluates a proposed low-power personalized speech detection algorithm, based on bone conduction data and a recurrent neural network, running on the implemented research platform, and compares it to an approach based on traditional microphone input. The bone-conduction system detects speech within 12.8 ms at an accuracy of 95%. Different SoC choices are contrasted, with the final implementation, based on the cutting-edge Ambiq Apollo 4 Blue SoC, achieving 2.64 mW average power consumption at 14 µJ per inference, reaching 43 h of battery life on a miniature 32 mAh Li-ion cell without duty cycling.
Citations: 2
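The quoted battery-life figure can be sanity-checked from the power and cell numbers in the abstract. The nominal 3.7 V Li-ion cell voltage below is our assumption; the paper reports 43 h directly:

```python
# Back-of-the-envelope check of the In-Ear-Voice battery-life claim.
CELL_MAH = 32.0       # cell capacity from the abstract (mAh)
NOMINAL_V = 3.7       # assumed nominal Li-ion voltage (not in abstract)
AVG_POWER_MW = 2.64   # average power from the abstract (mW)

energy_mwh = CELL_MAH * NOMINAL_V       # ~118.4 mWh stored energy
runtime_h = energy_mwh / AVG_POWER_MW   # ~44.8 h in the ideal case
```

An ideal runtime of roughly 44.8 h is consistent with the reported 43 h once converter losses and voltage sag are accounted for.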
Demo Abstract: DOME - IoT-Based Monitoring of Emergent Events for Wildland Fire Resilience
Fangqi Liu, Tung-Chun Chang, Kevin Liu, Cyrus Li, Sachin Krishan Thyaharajan, Sideshwar Jappeswaran Balasubramanian, N. Venkatasubramanian
DOI: 10.1145/3576842.3589180 · Published: 2023-05-09
Abstract: We present DOME, an IoT monitoring system that employs mobile drones and in-situ IoT devices to gather real-time data for situational awareness during emergent and evolving events, with a focus on wildland fires. DOME integrates and processes all collected sensing data and presents a dashboard that displays the dynamic status of various features, including fire, weather, and air quality. Based on the perceived fire status and wind conditions, DOME leverages physics-based fire models to predict the future evolution of the fire. Moreover, DOME integrates algorithms that plan the flight of multiple drones and control their motions to support automatic drone-based mobile sensing. This feature enables efficient data collection and enhances the system's overall monitoring capabilities.
Citations: 0
Poster Abstract: IoT-based Child Safety Alert System
Cody Beck, M. Hammoud, Dawson Kirkland, Spencer Kleparek, S. Tasnim
DOI: 10.1145/3576842.3589159 · Published: 2023-05-09
Abstract: According to the National Highway Traffic Safety Administration, 938 children have died due to heatstroke in vehicles since 1998, with a record high of 53 deaths in 2019. A majority of these deaths occur because the child was simply forgotten in the car. In order to tackle this problem, we have designed a system that alerts the driver that a child is still present in the car. Our system alerts the parents if they move too far away from the vehicle while the child is still in the car, or if the cabin temperature reaches a dangerous level (e.g., too hot or too cold).
Citations: 0
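The alert conditions described above reduce to a small predicate over three inputs. The thresholds below are invented for illustration; the poster does not give exact values:

```python
# Toy sketch of the alert logic: child presence gates everything, then
# either excessive parent distance or an unsafe cabin temperature fires.
MAX_DISTANCE_M = 20.0          # assumed parent-to-vehicle limit
SAFE_TEMP_C = (10.0, 35.0)     # assumed safe cabin temperature range

def should_alert(child_present, parent_distance_m, cabin_temp_c):
    """Return True when the parent should be alerted."""
    if not child_present:
        return False
    too_far = parent_distance_m > MAX_DISTANCE_M
    unsafe_temp = not (SAFE_TEMP_C[0] <= cabin_temp_c <= SAFE_TEMP_C[1])
    return too_far or unsafe_temp
```

Gating on child presence first avoids false alarms whenever the car is empty, regardless of distance or temperature readings.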