IEEE Journal of Selected Areas in Sensors: Latest Publications

Real-Time Interference Mitigation for Automotive Radar Sensor
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-09-11 | DOI: 10.1109/JSAS.2025.3609274
Yubo Wu; Alexander Li; Wenjing Lou; Y. Thomas Hou
Abstract: Automotive radar sensors play a crucial role in advanced driver assistance systems. As radar technology becomes increasingly common in vehicles, radar-to-radar interference poses a significant challenge, degrading target detection performance. An interference mitigation algorithm must reduce this interference effectively under dynamic driving conditions while meeting strict processing-time requirements. In this article, we present Soteria, a real-time interference mitigation algorithm for frequency modulated continuous wave (FMCW) radar systems that leverages compressed sensing techniques. Soteria identifies interference by exploiting the sparsity of signals in the frequency-time domain, then separates the desired signal from the interference using the orthogonal matching pursuit (OMP) algorithm. Additionally, Soteria exploits the inherent correlation between input data from neighboring time frames to reduce the search space of the OMP algorithm. To further enhance processing speed, Soteria is implemented using a GPU-based parallel computing approach. Simulation results indicate that Soteria achieves approximately 1 ms processing time while outperforming state-of-the-art methods in target detection accuracy.
Vol. 2, pp. 290-302 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11159154
Citations: 0
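As context for the OMP step named in the abstract above, here is a minimal, generic orthogonal matching pursuit sketch in Python (NumPy): it greedily selects the dictionary atom most correlated with the residual and re-fits the coefficients by least squares. The dictionary size, sparsity level, and noise level are illustrative assumptions; Soteria's frame-to-frame search-space reduction and GPU parallelization are not shown.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of dictionary A
    that best explain measurement y, refitting by least squares each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)              # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [1.0, -0.5, 2.0]    # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = omp(A, y, k=3)
print(np.nonzero(x_hat)[0])                 # recovered support, typically [10, 50, 200]
```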
DeepCPD: Deep Learning-Based In-Car Child Presence Detection Using WiFi
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-08-25 | DOI: 10.1109/JSAS.2025.3602722
Sakila S. Jayaweera; Beibei Wang; Wei-Hsiang Wang; K. J. Ray Liu
Abstract: Child presence detection (CPD) is a vital technology for vehicles to prevent heat-related fatalities or injuries by detecting a child left unattended. Regulatory agencies around the world are planning to mandate CPD systems in the near future. However, existing solutions have limitations in terms of accuracy, coverage, and additional device requirements. While WiFi-based solutions can overcome these limitations, existing approaches struggle to reliably distinguish between adult and child presence, leading to frequent false alarms, and are often sensitive to environmental variations. In this article, we present DeepCPD, a novel deep learning framework designed for accurate CPD in smart vehicles. DeepCPD utilizes an environment-independent feature, the autocorrelation function (ACF) derived from WiFi channel state information, to capture human-related signatures while mitigating environmental distortions. A Transformer-based architecture, followed by a multilayer perceptron, is employed to differentiate adults from children by modeling motion patterns and subtle body size differences. To address the limited availability of in-vehicle child and adult data, we introduce a two-stage learning strategy that significantly enhances model generalization. Extensive experiments conducted across more than 30 car models and over 500 h of data collection demonstrate that DeepCPD achieves an overall accuracy of 92.86%, outperforming a convolutional neural network (CNN) baseline (79.55%) by a substantial margin. In addition, the model attains a 91.45% detection rate for children while maintaining a low false alarm rate of 6.14%.
Vol. 2, pp. 278-289 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11141026
Citations: 0
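For readers unfamiliar with the ACF feature mentioned above, the sketch below computes a normalized autocorrelation function from a CSI magnitude time series: periodic body motion shows up as a peak at the motion period, largely independent of the static environment. The sampling rate and the breathing-like motion model are assumptions for demonstration only, not DeepCPD's actual pipeline.

```python
import numpy as np

def csi_acf(csi_mag, max_lag):
    """Normalized autocorrelation of one CSI magnitude time series."""
    x = csi_mag - csi_mag.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - lag], x[lag:]) / denom
                     for lag in range(1, max_lag + 1)])

fs = 100                                                  # assumed CSI sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
breathing = 0.2 * np.sin(2 * np.pi * 0.4 * t)             # ~0.4 Hz chest motion
csi_mag = 1.0 + breathing + 0.05 * rng.standard_normal(t.size)

acf = csi_acf(csi_mag, max_lag=3 * fs)
peak_lag = np.argmax(acf[fs:]) + fs + 1                   # search beyond 1 s to skip short lags
print(f"dominant ACF peak at lag {peak_lag} samples (~{peak_lag / fs:.2f} s)")
```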
SATVIO: Stereo Attention-Based Visual Inertial Odometry
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-08-21 | DOI: 10.1109/JSAS.2025.3601056
Raoof Doorshi; Hajira Saleem; Reza Malekian
Abstract: This study introduces SATVIO, a novel stereo attention-based visual inertial odometry model that aims to enhance odometry performance by leveraging deep learning techniques for sensor fusion. The research evaluates SATVIO against existing visual odometry methods on the KITTI odometry dataset, focusing on translational and rotational accuracy gains achieved through attention mechanisms and early fusion strategies. The proposed model integrates convolutional neural networks and long short-term memory networks to process and fuse data from stereo image inputs and inertial measurements. SATVIO employs triplet attention and early fusion techniques to address the challenges posed by scale ambiguity and environmental changes. The results demonstrate that the proposed model outperforms traditional methods in specific configurations, showing competitive or superior performance on key challenging sequences.
Vol. 2, pp. 259-265 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11133748
Citations: 0
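As a rough illustration of the early-fusion CNN plus LSTM idea described above, the toy PyTorch module below stacks a stereo pair on the channel axis, encodes IMU samples with a small MLP, concatenates the two feature streams per time step, and regresses per-step pose with an LSTM. All layer sizes and the overall structure are illustrative assumptions; this is not the SATVIO architecture and it omits triplet attention.

```python
import torch
import torch.nn as nn

class EarlyFusionVIO(nn.Module):
    """Toy early-fusion visual-inertial odometry sketch (not the published model)."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())            # -> 32-dim visual feature
        self.imu = nn.Sequential(nn.Linear(6, 32), nn.ReLU()) # accel + gyro -> 32-dim
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.head = nn.Linear(128, 6)                          # translation (3) + rotation (3)

    def forward(self, frames, imu):
        # frames: (B, T, 6, H, W) stereo pair stacked on channels; imu: (B, T, 6)
        B, T = frames.shape[:2]
        vis = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
        fused = torch.cat([vis, self.imu(imu)], dim=-1)        # early fusion per time step
        out, _ = self.lstm(fused)
        return self.head(out)                                  # per-step pose increments

model = EarlyFusionVIO()
poses = model(torch.randn(2, 5, 6, 64, 64), torch.randn(2, 5, 6))
print(poses.shape)  # torch.Size([2, 5, 6])
```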
Advancements in Blood Group Classification: A Novel Approach Using Machine Learning and RF Sensing Technology
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-08-21 | DOI: 10.1109/JSAS.2025.3601060
Malik Muhammad Arslan; Lei Guan; Xiaodong Yang; Nan Zhao; Abbas Ali Shah; Muhammad Bilal Khan; Mubashir Rehman; Syed Aziz Shah; Qammer H. Abbasi
Abstract: Blood group classification is critical for enhancing the safety of blood transfusions, preventing transfusion-related complications, and facilitating emergency medical interventions and organ transplantation. Unlike traditional methods that require blood draws and chemical reagents, our approach analyzes the unique electromagnetic signatures of blood samples through radio frequency (RF) sensing at 1.2 GHz. We developed a custom software-defined radio (SDR) platform that captures subtle variations in orthogonal frequency-division multiplexing subcarriers, which are then processed by machine learning algorithms including gradient boosting and random forest. Testing on 5840 samples across eight blood groups demonstrated 97.8% classification accuracy, with results delivered in just 1.5 s, significantly faster than conventional 30–60 min laboratory methods. The system's integration of RF sensing and machine learning eliminates the need for reagents or physical contact while maintaining high precision, offering particular advantages for emergency situations and resource-limited settings. This work represents a paradigm shift in blood typing technology, combining the portability of SDR hardware with the analytical power of machine learning to create a faster, safer alternative to traditional approaches. The demonstrated accuracy and speed suggest strong potential for clinical adoption in transfusion medicine and point-of-care diagnostics.
Vol. 2, pp. 266-277 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11131658
Citations: 0
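To make the classification stage above concrete, here is a hedged sketch of training a random forest on per-subcarrier feature vectors with scikit-learn. The data are synthetic stand-ins generated in the script (the class-dependent mean shift is a made-up assumption); the paper's actual features come from 1.2 GHz SDR measurements of blood samples.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in: one feature vector of per-subcarrier responses per sample,
# with each hypothetical blood group shifting the response slightly.
rng = np.random.default_rng(0)
n_per_class, n_subcarriers, n_classes = 200, 64, 8
X = np.concatenate([rng.normal(loc=0.02 * c, scale=0.1, size=(n_per_class, n_subcarriers))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy on synthetic data: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```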
MATF-Net: Multiscale Attention With Tristream Fusion Network for Radar Modulation Recognition in S-Band
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-07-30 | DOI: 10.1109/JSAS.2025.3594012
Fan Zhou; Jinyang Ren; Fanyu Xu; Yang Wang; Wei Wang; Peiying Zhang
Abstract: Automatic modulation recognition (AMR) of radar signals plays a crucial role in information-centric warfare and holds significant importance in military applications such as radar detection. In modern information-centric battlefields, the S-band (2–4 GHz) is widely utilized for tasks such as pulse radar detection due to its abundant spectral resources, excellent adaptability, and suitability for equipment miniaturization. However, under electromagnetic countermeasure conditions, radar signals within the S-band become dense and complex, making accurate modulation recognition particularly challenging. Existing methods often fail to adequately extract and fuse the multimodal features of signals, resulting in unreliable recognition performance in complex electromagnetic environments. Consequently, achieving robust AMR of radar signals under low signal-to-noise ratio (SNR) conditions has become critically important. To address these challenges, this article proposes a multiscale attention with tristream fusion network (MATF-Net) to improve AMR of radar signals under low SNR conditions, mitigating issues such as feature ambiguity and noise interference. The proposed network comprises three main components: the three-stream feature extraction network (TFEN), the self-attention fusion network (SAFN), and the multiscale information fusion network (MIFN). Within TFEN, three specialized modules are designed: the spatial extraction module, the corresponding extraction module, and the temporal-compensation module. By extracting spatial features, amplitude and phase information, and temporal compensation features in parallel, TFEN addresses the performance degradation typically encountered in low SNR scenarios. The SAFN and MIFN modules prioritize salient information across modalities, compute interfeature correlations, and perform weighted fusion to enable dynamic selection and multiscale integration, enhancing the representational capacity of the fused features. Simulation results demonstrate that the proposed model achieves an average accuracy of 88.54% across SNR levels from −20 dB to 20 dB, significantly outperforming existing methods and exhibiting superior adaptability.
Vol. 2, pp. 247-258 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11104803
Citations: 0
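The weighted fusion of three feature streams described above can be pictured with a minimal attention-style fusion layer; the PyTorch sketch below scores each modality's feature vector, softmax-normalizes the scores, and returns the weighted sum. The layer, dimensions, and inputs are illustrative assumptions, not the SAFN/MIFN design from the paper.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse three per-modality feature vectors with learned attention weights."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)          # scores each modality's feature vector

    def forward(self, feats):                   # feats: (batch, 3, dim)
        weights = torch.softmax(self.score(feats), dim=1)   # (batch, 3, 1)
        return (weights * feats).sum(dim=1)                  # (batch, dim)

fusion = AttentionFusion(dim=64)
spatial, amp_phase, temporal = (torch.randn(8, 64) for _ in range(3))
fused = fusion(torch.stack([spatial, amp_phase, temporal], dim=1))
print(fused.shape)  # torch.Size([8, 64])
```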
A Precision Smart Healthcare System With Deep Learning for Real-Time Radiographic Localization and Severity Assessment of Peri-Implantitis
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-07-15 | DOI: 10.1109/JSAS.2025.3588071
Chiung-An Chen; Ya-Yun Huang; Yi-Cheng Mao; Wei-Jiun Feng; Tsung-Yi Chen; Chen-Ye Ciou; Wei-Chen Tu; Patricia Angela R. Abu
Abstract: Peri-implantitis is a common complication associated with the growing use of dental implants, and clinicians often rely on periapical radiographs for its diagnosis. Recent studies have explored image analysis and artificial intelligence (AI) to reduce the diagnostic workload and time. However, the low quality of periapical images and inconsistent angulation across serial radiographs complicate clinical assessment of peri-implant bone changes, making it challenging for AI to accurately evaluate the severity of peri-implantitis. To address this issue, this study proposes a novel system for the identification and localization of peri-implantitis in periapical radiographs. The study uses the YOLOv8 oriented bounding boxes (OBB) model to identify dental implant locations, significantly improving localization accuracy (98.48%) compared to previous research. Since peri-implantitis is diagnosed unilaterally, the algorithm splits the implant in X-ray images to facilitate analysis. Subsequent steps enhance the visibility of symptoms by applying histogram equalization and coloring the implant parts. A convolutional neural network (CNN) model, particularly EfficientNet-B0, further improves detection accuracy (94.05%). In addition, an AI-based method was introduced to assess the severity of peri-implantitis by classifying thread damage, achieving 90.48% accuracy. This deep learning approach significantly reduces interpretation time for X-rays, easing the dentist's workload, minimizing misdiagnosis risks, lowering healthcare costs, and benefiting more patients.
Vol. 2, pp. 222-231 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11079813
Citations: 0
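The histogram equalization preprocessing step mentioned above is a standard contrast enhancement; as a reference, here is a plain NumPy sketch that maps an 8-bit grayscale image through its normalized cumulative histogram. The dummy low-contrast image stands in for a periapical radiograph; the paper's full pipeline (YOLOv8 OBB localization, implant splitting, coloring, EfficientNet-B0 classification) is not reproduced here.

```python
import numpy as np

def equalize_histogram(img):
    """Stretch an 8-bit grayscale image's contrast via its cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)                  # ignore empty bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 / (cdf_masked.max() - cdf_masked.min())
    lut = np.ma.filled(cdf_scaled, 0).astype(np.uint8)       # lookup table 0..255
    return lut[img]

rng = np.random.default_rng(0)
radiograph = rng.integers(60, 140, size=(256, 256), dtype=np.uint8)  # low-contrast dummy image
enhanced = equalize_histogram(radiograph)
print(radiograph.min(), radiograph.max(), "->", enhanced.min(), enhanced.max())
```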
Data-Driven Smart Sensing and Real-Time High-Fidelity Digital Twin for Building Management
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-07-14 | DOI: 10.1109/JSAS.2025.3588827
Zhizhao Liang; Jagdeep Singh; Yichao Jin
Abstract: Achieving a high-fidelity digital twin of a building is desirable but often requires a substantial influx of real-time data and thus a dense network of environmental sensors. In addition, high hardware expenses, deployment costs, constraints, and sensor malfunctions can impede the realization of such digital twins. In this article, we introduce TwinSense, a system that harnesses data-driven virtual sensing within a real-time, high-fidelity 3-D digital twin of the building environment. Our method uses machine learning (ML)-based inference to estimate real-time sensor variables, such as temperature and CO2 levels, across both 2-D (varying rooms) and 3-D (diverse elevations) domains, with limited reliance on physical sensors. By extending sensing coverage through ML-driven virtual smart sensing, we create a more accurate digital twin for advanced building management. Our case study results across different seasons indicate an average mean absolute percentage error of 2% for temperature and approximately 5% for air quality parameters, such as CO2, when compared against ground-truth physical sensors deployed in the multiroom segmented building. Furthermore, we propose a modified 3-D inverse distance weighting (IDW) thermal interpolation method. Leveraging the multicamera-angle visualization capabilities of the Unreal Engine, our analysis revealed a potential anomaly within the air conditioning system. The building management team validated this observation during the real-world trial, affirming the initial efficacy of our solution.
Vol. 2, pp. 232-246 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11079784
Citations: 0
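For orientation, the sketch below implements the textbook (unmodified) 3-D inverse distance weighting interpolation that the modified method above builds on: each query location receives a distance-weighted average of nearby sensor readings. The sensor positions, temperature readings, and the power parameter are made-up assumptions for illustration.

```python
import numpy as np

def idw_3d(sensor_xyz, sensor_vals, query_xyz, power=2.0, eps=1e-9):
    """Inverse distance weighting in 3-D: weights are 1 / distance**power."""
    d = np.linalg.norm(query_xyz[:, None, :] - sensor_xyz[None, :, :], axis=-1)  # (Q, S)
    w = 1.0 / (d + eps) ** power
    return (w * sensor_vals).sum(axis=1) / w.sum(axis=1)

# Four hypothetical temperature sensors (x, y, z in metres) and two query points.
sensors = np.array([[0, 0, 1], [5, 0, 1], [0, 4, 1], [5, 4, 2.5]], dtype=float)
temps = np.array([21.0, 22.5, 20.5, 23.0])
queries = np.array([[2.5, 2.0, 1.0], [4.5, 3.5, 2.0]])
print(idw_3d(sensors, temps, queries))    # interpolated temperatures at the query points
```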
Position Monitoring of Human Hands Using Cross-Slot Antennas and AI Integration for Hand Swing Activity Analysis
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-07-04 | DOI: 10.1109/JSAS.2025.3586207
Shilpa Pavithran; Vineeta V Nair; Aravind S; Elizabeth George; Alex James
Abstract: This work details the position monitoring of human hands using two similar cross-slot antennas operating in the 2–3 GHz frequency range. One antenna is kept on the chest and the other on the hand of a volunteer to monitor hand swing activity. The measured transmission values (S21) and their corresponding frequencies are used to generate synthetic S21 data with custom generative adversarial networks (GANs). Classification of the data for two positions is performed with an artificial neural network (ANN), support vector machines (SVMs), a decision tree (DT), and a random forest. The ANN gives an accuracy of 85% and is implemented on an FPGA. The article also discusses electromagnetic (EM) wave propagation around the human torso.
Vol. 2, pp. 212-221 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11072042
Citations: 0
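To illustrate the GAN-based augmentation idea mentioned above, the sketch below trains a toy 1-D GAN whose "real" traces are synthetic stand-ins for |S21|-versus-frequency curves (a notch of varying depth and position). The trace model, network sizes, and training settings are assumptions for demonstration only, not the custom GANs from the paper.

```python
import torch
import torch.nn as nn

N_FREQ, LATENT = 64, 16
freqs = torch.linspace(2.0, 3.0, N_FREQ)                 # assumed GHz axis

def real_batch(n):
    """Stand-in 'measured' traces: a resonance notch whose position and depth vary."""
    notch = 2.4 + 0.2 * torch.rand(n, 1)
    depth = 10 + 5 * torch.rand(n, 1)
    return -depth * torch.exp(-((freqs - notch) ** 2) / 0.005) + 0.3 * torch.randn(n, N_FREQ)

G = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, N_FREQ))
D = nn.Sequential(nn.Linear(N_FREQ, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    real = real_batch(32)
    fake = G(torch.randn(32, LATENT))
    # Discriminator: label real traces 1, generated traces 0
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to make the discriminator output 1 on generated traces
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic_s21 = G(torch.randn(5, LATENT)).detach()        # 5 synthetic traces
print(synthetic_s21.shape)                                 # torch.Size([5, 64])
```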
Robust Skeletal-Graph Reconstruction Using mmWave Radar and its Application for Human-Activity Recognition
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-06-19 | DOI: 10.1109/JSAS.2025.3581498
Ta-Wei Wu; Shih-Hau Fang; Hsiao-Chun Wu; Guannan Liu; Kun Yan
Abstract: Skeletal graphs have in recent years provided concise and reliable features for human-activity recognition. However, they have to be acquired by Kinect sensors or regular cameras, which rely on sufficient lighting, and they can only be created from the front views of sensors and cameras in the absence of any obstacle. These restrictions limit the practical applicability of skeletal graphs. In this work, we therefore investigate robust skeletal-graph reconstruction using millimeter-wave (mmWave) radar. The mmWave radar, which does not require line-of-sight propagation for data acquisition, can be mounted anywhere in a room and operates in darkness, so it can overcome the aforementioned drawbacks. We propose to utilize the double-view cumulative numbers of radar point-cloud points, the temporal differentials of these cumulative numbers, and Doppler velocities as input features, and we adopt a deep-learning network integrating a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM). To fully investigate the effectiveness of the proposed network for robust skeletal-graph reconstruction, we evaluate reconstruction accuracy in terms of mean absolute errors subject to human-location and human-orientation mismatches between the training and testing stages. Furthermore, we investigate the advantage of the proposed approach for human-activity recognition, a primary application of skeletal graphs. We also compare the performance of our approach with two prevalent methods, namely mmPose-natural language processing and BiLSTM in conjunction with CNN, using the 3-D coordinates, signal-to-noise ratios, and Doppler velocities as input features. Our experiments show that the proposed approach outperforms both existing methods in skeletal-graph reconstruction and human-activity recognition.
Vol. 2, pp. 199-211 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11045161
Citations: 0
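The mean-absolute-error evaluation described above amounts to averaging per-joint coordinate errors across frames; a minimal sketch follows. The 17-joint skeleton and the 5 cm error scale are arbitrary assumptions used only to exercise the function.

```python
import numpy as np

def skeleton_mae(pred, truth):
    """MAE between predicted and ground-truth skeletons.
    pred, truth: (frames, joints, 3) arrays of joint coordinates in metres."""
    per_joint = np.abs(pred - truth).mean(axis=(0, 2))   # MAE per joint
    return per_joint, per_joint.mean()                   # and the overall MAE

rng = np.random.default_rng(0)
truth = rng.uniform(-1, 1, size=(100, 17, 3))            # assumed 17-joint skeleton, 100 frames
pred = truth + rng.normal(scale=0.05, size=truth.shape)  # reconstruction with ~5 cm error
per_joint, overall = skeleton_mae(pred, truth)
print(f"overall MAE: {overall:.3f} m")
```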
Instrumenting a Virtual Reality Headset to Monitor Changes in Electroencephalograms of PTSD Patients During Multisensory Immersion
IEEE Journal of Selected Areas in Sensors | Pub Date: 2025-03-24 | DOI: 10.1109/JSAS.2025.3554131
Belmir J. de Jesus; Marilia K. S. Lopes; Léa Perreault; Marie-Claude Roberge; Alcyr A. Oliveira; Tiago H. Falk
Abstract: Virtual reality (VR) has emerged as a promising tool to help treat posttraumatic stress disorder (PTSD) symptoms and to help patients manage their anxiety. More recently, multisensory immersive experiences involving audio-visual-olfactory stimuli have been shown to lead to improved relaxation states. Despite these advances, very little is known about the psychophysiological changes resulting from these interventions, and outcomes need to be monitored via questionnaires and interviews at the end of the intervention. In this article, we propose to instrument a VR headset with several biosensors to track neural changes throughout the intervention, as well as the progress of different neuromarkers, namely the powers in the five conventional electroencephalogram (EEG) frequency subbands computed at the frontal, central, parietal, and occipital areas of the brain. In total, 20 participants diagnosed with PTSD by their medical doctors took part in the experiment and underwent a 12-session multisensory nature-immersion protocol. We show the changes observed for those who benefited and those who did not benefit from the intervention, leading to insights on potential new markers of intervention outcomes that could save patients and medical professionals time and resources. The proposed headset also allowed changes in arousal states and EEG patterns to be tracked, providing additional insights on the disorder as well as on the effects of the intervention on patient symptoms.
Vol. 2, pp. 150-161 | PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10938306
Citations: 0
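The five-subband EEG powers tracked above can be estimated per channel in a few lines; the following sketch uses Welch's PSD estimate (SciPy) and integrates it over the conventional delta through gamma bands. The sampling rate, band edges, and the synthetic alpha-dominated signal are illustrative assumptions, not the paper's processing chain.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs):
    """Absolute power in the five conventional EEG bands for one channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for name, (lo, hi) in BANDS.items()}

fs = 256                                        # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
# Dummy frontal-channel signal: strong 10 Hz alpha rhythm plus broadband noise.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)
powers = band_powers(eeg, fs)
print(max(powers, key=powers.get))              # expected: "alpha"
```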