{"title":"FedUSL:基于多模态传感数据的驾驶疲劳检测联合注释方法","authors":"Songcan Yu, Qinglin Yang, Junbo Wang, Celimuge Wu","doi":"10.1145/3657291","DOIUrl":null,"url":null,"abstract":"<p>Single-modal data has a limitation on fatigue detection, while the shortage of labeled data is pervasive in multimodal sensing data. Besides, it is a time-consuming task for board-certified experts to manually annotate the physiological signals, especially hard for EEG sensor data. To solve this problem, we propose FedUSL (Federated Unified Space Learning), a federated annotation method for multimodal sensing data in the driving fatigue detection scenario, which has the innate ability to exploit more than four multimodal data simultaneously for correlations and complementary with low complexity. To validate the efficiency of the proposed method, we first collect the multimodal data (aka, camera, physiological sensor) through simulated fatigue driving. The data is then preprocessed and features are extracted to form a usable multimodal dataset. Based on the dataset, we analyze the performance of the proposed method. The experimental results demonstrate that FedUSL outperforms other approaches for driver fatigue detection with carefully selected modal combinations, especially when a modality contains only \\(10\\% \\) labeled data.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"27 1","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FedUSL: A Federated Annotation Method for Driving Fatigue Detection based on Multimodal Sensing Data\",\"authors\":\"Songcan Yu, Qinglin Yang, Junbo Wang, Celimuge Wu\",\"doi\":\"10.1145/3657291\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Single-modal data has a limitation on fatigue detection, while the shortage of labeled data is pervasive in multimodal sensing data. Besides, it is a time-consuming task for board-certified experts to manually annotate the physiological signals, especially hard for EEG sensor data. To solve this problem, we propose FedUSL (Federated Unified Space Learning), a federated annotation method for multimodal sensing data in the driving fatigue detection scenario, which has the innate ability to exploit more than four multimodal data simultaneously for correlations and complementary with low complexity. To validate the efficiency of the proposed method, we first collect the multimodal data (aka, camera, physiological sensor) through simulated fatigue driving. The data is then preprocessed and features are extracted to form a usable multimodal dataset. Based on the dataset, we analyze the performance of the proposed method. 
The experimental results demonstrate that FedUSL outperforms other approaches for driver fatigue detection with carefully selected modal combinations, especially when a modality contains only \\\\(10\\\\% \\\\) labeled data.</p>\",\"PeriodicalId\":50910,\"journal\":{\"name\":\"ACM Transactions on Sensor Networks\",\"volume\":\"27 1\",\"pages\":\"\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2024-04-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Sensor Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1145/3657291\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Sensor Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3657291","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
FedUSL: A Federated Annotation Method for Driving Fatigue Detection based on Multimodal Sensing Data
Abstract: Single-modal data is of limited use for fatigue detection, while a shortage of labeled data is pervasive in multimodal sensing data. Moreover, manually annotating physiological signals is time-consuming even for board-certified experts, and EEG sensor data is especially hard to label. To address these problems, we propose FedUSL (Federated Unified Space Learning), a federated annotation method for multimodal sensing data in the driving fatigue detection scenario, which can exploit more than four modalities simultaneously to capture their correlations and complementarity with low complexity. To validate the efficiency of the proposed method, we first collect multimodal data (e.g., camera and physiological sensor data) through simulated fatigue driving. The data is then preprocessed and features are extracted to form a usable multimodal dataset. Based on this dataset, we analyze the performance of the proposed method. The experimental results demonstrate that FedUSL outperforms other approaches for driver fatigue detection with carefully selected modality combinations, especially when a modality contains only 10% labeled data.
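The abstract outlines a pipeline (collect multimodal data, preprocess and extract features, then annotate from a small labeled fraction via a unified space) but gives no implementation details. Below is a minimal, illustrative sketch of that general idea only, not the authors' FedUSL algorithm: it generates synthetic two-modality features, uses CCA as a stand-in for learning a shared space, and trains a logistic-regression classifier on a 10% labeled subset to annotate the remaining samples. All data, names, and the choice of CCA are assumptions.

```python
# Illustrative sketch only: FedUSL itself is not reproduced here, and the
# federated training across clients described in the paper is omitted.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_cam, d_phys = 1000, 32, 16          # samples, camera / physiological feature dims

# Synthetic "fatigue" labels and modality features that weakly encode them.
y = rng.integers(0, 2, size=n)
X_cam = rng.normal(size=(n, d_cam)) + y[:, None] * 0.8
X_phys = rng.normal(size=(n, d_phys)) + y[:, None] * 0.5

# Project both modalities into a shared latent space (a stand-in for the
# "unified space" named in the abstract).
cca = CCA(n_components=8)
Z_cam, Z_phys = cca.fit(X_cam, X_phys).transform(X_cam, X_phys)
Z = np.hstack([Z_cam, Z_phys])            # fused representation

# Only 10% of the samples carry expert labels, mirroring the evaluation setting.
labeled = rng.choice(n, size=n // 10, replace=False)
clf = LogisticRegression(max_iter=1000).fit(Z[labeled], y[labeled])

# Propagate annotations to the unlabeled 90%.
unlabeled = np.setdiff1d(np.arange(n), labeled)
pseudo_labels = clf.predict(Z[unlabeled])
print("pseudo-label accuracy on unlabeled split:",
      (pseudo_labels == y[unlabeled]).mean())
```

In the paper's setting, the shared-space model would additionally be trained in a federated manner and would fuse more than two modalities; this sketch only shows the annotate-from-a-small-labeled-fraction idea in a shared feature space.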
Journal introduction:
ACM Transactions on Sensor Networks (TOSN) is a central ACM publication in the interdisciplinary area of sensor networks, spanning a broad range of disciplines from signal processing, networking and protocols, embedded systems, and information management to distributed algorithms. It covers research contributions that introduce new concepts, techniques, analyses, or architectures, as well as applied contributions that report on the development of new tools and systems or on experiences and experiments with high-impact, innovative applications. The Transactions pays special attention to contributions on systemic approaches to sensor networks as well as fundamental contributions.