IEEE Sensors Journal: Latest Publications

Identifying the Respiratory Sound Based on Single-Channel Separation and Hyperdimensional Computing
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-03, DOI: 10.1109/JSEN.2025.3557909
Jie Zheng; Yixuan Wang; Jinglong Niu; Yan Shi; Fei Xie
Abstract: In intensive care units (ICUs), efficient respiratory management, particularly sputum suction in weakened patients, is critical. Traditional stethoscope-based methods for respiratory sound analysis in tracheal sputum assessment are time-consuming and often struggle to differentiate between cardiac and respiratory sounds, affecting sputum detection accuracy. To address these issues, we propose identifying the respiratory sound based on single-channel separation and hyperdimensional computing (IRS-SSHC). The proposed method first employs an encoder-decoder framework to separate heart and respiratory sounds in the time domain. It then segments the respiratory sounds using short-duration energy, representing each segment by a 1024-D vector. Finally, it applies a light gradient boosting machine (LightGBM) to these vectors for classification. Experimental results show that IRS-SSHC reaches a classification accuracy (ACC) of 97.9%, outperforming existing methods.
Vol. 25, No. 13, pp. 24626-24633
Citations: 0
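A minimal Python sketch of the segment-then-classify stage described in the abstract, assuming the respiratory waveform has already been separated from heart sounds; the frame length, energy threshold, placeholder embedding, and LightGBM settings are illustrative assumptions, not the authors' configuration.

```python
# Sketch of short-time-energy segmentation followed by LightGBM classification.
# Frame size, threshold, and the 1024-D embedding are illustrative placeholders.
import numpy as np
import lightgbm as lgb

def energy_segments(x, frame=1024, hop=512, thresh=0.01):
    """Return (start, end) sample indices of frames whose short-time energy exceeds thresh."""
    segs = []
    for start in range(0, len(x) - frame, hop):
        frame_x = x[start:start + frame]
        if np.mean(frame_x ** 2) > thresh:
            segs.append((start, start + frame))
    return segs

def embed(segment):
    """Placeholder 1024-D spectral representation (the paper's encoding is not reproduced here)."""
    spec = np.abs(np.fft.rfft(segment, n=2046))[:1024]
    return spec / (np.linalg.norm(spec) + 1e-9)

def train_classifier(X, y):
    """X: (n_segments, 1024) features; y: segment labels (e.g., sputum vs. normal breathing)."""
    clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
    clf.fit(X, y)
    return clf
```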
Frequency-Domain Feature Interaction Combined With Multiscale Attention for Remote Sensing Change Detection
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-03, DOI: 10.1109/JSEN.2025.3583301
Zhongxiang Xie; Shuangxi Miao; Zhewei Zhang; Xuecao Li; Jianxi Huang
Abstract: Change detection (CD) in remote sensing images has seen significant advancement due to the powerful discriminative capabilities of deep convolutional networks. However, the domain gap and pseudo-changes between bi-temporal images, caused by variations in imaging conditions such as illumination, shadow, and background, remain a challenge. Furthermore, multiscale variations in complex scenes complicate the accurate identification of change regions and their boundary delineation. To address these issues, this article introduces the frequency-domain feature interaction and multiscale attention mechanism network (FIMANet). Specifically, to mitigate the impact of pseudo-change interference, FIMANet reduces the domain gap and facilitates information coupling within intralevel representations through frequency-domain feature interaction (FDFI). To prevent information loss and noise introduction, a multiple kernel inception (MKI) module is devised to capture multiscale features and perform progressive fusion. Finally, to enhance the extraction of changes in scale-sensitive regions, FIMANet constructs a cross-scale feature aggregator (CSFA) module, composed of attention at various scales and a transformer, to capture fine-grained details and global dependencies. Comparative experiments with nine methods on three commonly used datasets validate the effectiveness of FIMANet, which achieves the highest F1-score of 73.98% on the CLCD dataset, 90.55% on WHU-CD, and 91.01% on LEVIR-CD. The code is available at https://github.com/zxXie-Air/FIMANet
Vol. 25, No. 15, pp. 29284-29295
Citations: 0
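The frequency-domain interaction idea can be illustrated with a small PyTorch sketch: bi-temporal feature maps are transformed with an FFT, their amplitude spectra are partially exchanged to suppress illumination/style differences, and the result is transformed back. This is one plausible reading of FDFI under stated assumptions, not the released implementation (which is at the GitHub link above); the mixing ratio alpha is an assumed hyperparameter.

```python
# Illustrative frequency-domain interaction between bi-temporal feature maps (B, C, H, W):
# exchange part of the amplitude (appearance) while keeping each image's phase (structure).
import torch

def fdfi(feat_t1, feat_t2, alpha=0.5):
    F1 = torch.fft.rfft2(feat_t1, norm="ortho")
    F2 = torch.fft.rfft2(feat_t2, norm="ortho")
    amp1, pha1 = F1.abs(), F1.angle()
    amp2, pha2 = F2.abs(), F2.angle()
    mixed1 = (1 - alpha) * amp1 + alpha * amp2
    mixed2 = (1 - alpha) * amp2 + alpha * amp1
    out1 = torch.fft.irfft2(torch.polar(mixed1, pha1), s=feat_t1.shape[-2:], norm="ortho")
    out2 = torch.fft.irfft2(torch.polar(mixed2, pha2), s=feat_t2.shape[-2:], norm="ortho")
    return out1, out2
```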
A Lightweight Subtraction-Convolution Network via Adaptive Sparse Feature Extraction for Interpretable Intelligent Edge Diagnosis
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-03, DOI: 10.1109/JSEN.2025.3583820
Qihang Wu; Zhiming Wang; Yuanyuan Xu; Wenbin Huang; Xiaoxi Ding
Abstract: From the perspective of signal processing combined with deep learning (DL), the interpretability of the features separated by a DL model is an important factor affecting reliability and accuracy. Considering the challenges of large data-transmission volumes and large-model deployment in real-time fault diagnosis, this study proposes a lightweight subtraction-convolution network (SCN) for industrial intelligent edge fault diagnosis. An array of randomly initialized sparse kernels (SKs) is designed to interpretably achieve adaptive sparse spectrum feature separation under an L1 regularization constraint. Additionally, depthwise separable convolution (DSC) is employed as a substitute for the conventional convolution operation to reduce the computational burden, yielding a more lightweight model named SCN-L. Extensive experiments on self-collected data indicate that the proposed SCN and SCN-L achieve a lightweight footprint, high accuracy, and interpretability, and a public dataset is used to illustrate the generalizability of the proposed model. Furthermore, an intelligent edge diagnosis node (EDN) hardware platform running SCN-L is designed to implement efficient industrial intelligent edge diagnosis. The experimental results show that the proposed model performs efficiently in edge diagnosis, indicating great potential for industrial application.
Vol. 25, No. 15, pp. 29325-29335
Citations: 0
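The two lightweight ingredients named in the abstract, an L1-regularized sparse kernel bank and depthwise separable convolution, can be sketched in a few lines of PyTorch; the layer sizes and penalty weight below are assumptions, not the published SCN-L architecture.

```python
# Sketch of an L1-regularized sparse-kernel layer plus a depthwise separable 1-D convolution.
import torch
import torch.nn as nn

class SparseKernelLayer(nn.Module):
    def __init__(self, in_ch=1, n_kernels=16, kernel_size=64):
        super().__init__()
        # Randomly initialized kernels whose weights are driven toward sparsity by an L1 penalty.
        self.kernels = nn.Conv1d(in_ch, n_kernels, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        return torch.relu(self.kernels(x))

    def l1_penalty(self):
        return self.kernels.weight.abs().sum()

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, ch, out_ch, kernel_size=9):
        super().__init__()
        self.depthwise = nn.Conv1d(ch, ch, kernel_size, padding=kernel_size // 2, groups=ch)
        self.pointwise = nn.Conv1d(ch, out_ch, 1)

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

# Training-time objective: task loss plus the sparsity constraint on the kernel bank, e.g.
# loss = criterion(logits, labels) + 1e-4 * sparse_layer.l1_penalty()
```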
A Molecular Imprinted Quartz Crystal Microbalance Sensor for Reliable Detection of Alpha-Terpineol in Various Pine Essential Oils
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-03, DOI: 10.1109/JSEN.2025.3583871
Deepam Gangopadhyay; Sumit Kundu; Mahuya Bhattacharyya Banerjee; Shreya Nag; Panchanan Pramanik; Runu Banerjee Roy
Abstract: α-terpineol (A-Te), a bioactive monoterpenoid, is widely used in cosmetics and aromatherapy for its pleasing fragrance and flavor. Recent studies have acknowledged its immense potential in biological applications and naturopathy. This study aims to develop a low-cost detection mechanism for A-Te using a sensitive quartz crystal microbalance (QCM) sensor, employing a highly selective molecularly imprinted polymer (MIP) of methyl methacrylate (MMA) and acrylic acid (AA). The frequency deviation of the sensor has been utilized for A-Te detection in four commercial-grade pine essential oils (PEOs), yielding a remarkable sensitivity of 0.149 Hz/ppm with a wide linear range of 5-800 ppm. The reliability of the sensor has been assessed through reproducibility and repeatability studies, showing promising values of 90.70% and 92.32%, respectively. The limit of detection (LOD) is 1.33 ppm. Polymer characterization and the surface morphology of the sensor have been analyzed through Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM), respectively. Furthermore, responses obtained from the PEO samples were correlated with the conventional gas chromatographic method using principal component regression (PCR) and random forest regression (RFR) models. Notably, a high prediction accuracy (96.38%) has been achieved with PCR.
Vol. 25, No. 15, pp. 27966-27973
Citations: 0
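As a quick illustration of how the reported figures relate, a linear calibration maps frequency shift to concentration via the stated sensitivity, and the conventional 3.3·σ/slope rule links blank noise to the limit of detection. The blank-noise value below is an assumption chosen so the rule reproduces the reported 1.33-ppm LOD; it is not a value from the paper.

```python
# Linear QCM calibration using the reported sensitivity (0.149 Hz/ppm) and the
# conventional LOD = 3.3 * sigma_blank / slope rule; sigma_blank is an assumed value.
SENSITIVITY_HZ_PER_PPM = 0.149

def concentration_ppm(delta_f_hz, intercept_hz=0.0):
    """Estimate A-Te concentration from the frequency shift (linear range 5-800 ppm)."""
    return (delta_f_hz - intercept_hz) / SENSITIVITY_HZ_PER_PPM

sigma_blank_hz = 0.06  # assumed blank-signal noise, not a reported value
lod_ppm = 3.3 * sigma_blank_hz / SENSITIVITY_HZ_PER_PPM
print(f"{concentration_ppm(14.9):.0f} ppm, LOD ≈ {lod_ppm:.2f} ppm")
```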
IEEE Sensors Council
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-03, DOI: 10.1109/JSEN.2025.3580308
Vol. 25, No. 13, p. C3 (open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11069378)
Citations: 0
Research on Multimodal Recognition Methods for Perimeter Security Based on the Fusion of DVS and Video Surveillance
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-02, DOI: 10.1109/JSEN.2025.3582973
Wei Zhao; Shaodong Jiang; Yang Zhao; Faxiang Zhang
Abstract: Perimeter security systems widely employ distributed fiber-optic sensing and video surveillance as sensing means, yet significant limitations remain in practical applications. Distributed fiber-optic sensing is susceptible to interference from coupled environmental noise, resulting in a high false-alarm rate, while video surveillance suffers from increased image noise and blurred target outlines in complex environments. These problems are compounded by background complexity, which makes it difficult to accurately identify subtle behavioral differences. To address these challenges, this article proposes a multimodal fusion classification model, HMFusionNet, which leverages the complementary information from distributed vibration sensing (DVS) and video surveillance to improve classification accuracy. First, we introduce the CGANet module to extract features from 1-D fiber vibration signals and capture the periodic characteristics of the fiber time series. Second, we design the PoseMobiNet module to extract 2-D image features based on human keypoint data and RGB image information, addressing the complexity of the perimeter-security background and the subtlety of behavioral differences among intruders. During the feature-fusion stage, we propose a probabilistic weighting-based late-fusion strategy to integrate decision-level information from both modalities. Finally, on a multimodal dataset constructed from a real-world perimeter-security scenario, HMFusionNet achieves a detection accuracy of 97.7% with a recognition time of less than 0.1 s.
Vol. 25, No. 15, pp. 29213-29220
Citations: 0
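The probabilistic weighting-based late fusion can be illustrated with a short sketch: each modality branch outputs class probabilities, and the decision is taken on their weighted sum. The weight values and the three-class example are assumptions for illustration only.

```python
# Probability-weighted late fusion of two modality classifiers (DVS and video).
import torch

def late_fusion(probs_dvs, probs_video, w_dvs=0.6, w_video=0.4):
    """probs_*: (batch, n_classes) softmax outputs from each single-modality branch."""
    fused = w_dvs * probs_dvs + w_video * probs_video
    return fused.argmax(dim=1), fused

# Example with three hypothetical intrusion classes:
p_dvs = torch.tensor([[0.7, 0.2, 0.1]])
p_vid = torch.tensor([[0.4, 0.5, 0.1]])
labels, fused = late_fusion(p_dvs, p_vid)
print(labels, fused)
```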
High-Sensitivity Fiber Bragg Grating Pressure Sensor With a Hinged-Lever Structure
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-02, DOI: 10.1109/JSEN.2025.3583300
Qiang Liu; Shuhui Wei; Shenglong Gu; Jian Han; Chao Ma; Pengfei Lu; Jingwei Lv; Paul K. Chu; Chao Liu
Abstract: This article presents a high-sensitivity fiber Bragg grating (FBG) pressure sensor with a metal diaphragm and hinge-lever structure designed for small-range pressure measurement. The sensor employs hinge groups and a dual-lever structure to amplify the small strain induced by diaphragm deformation, thereby enhancing sensitivity. The sensor structure is analyzed and optimized by the finite element method, and the sensor is fabricated and tested on a pressure calibration platform. The experimental data show that the pressure sensitivity is 3.382 pm/kPa in the range of 0-1 MPa, with a correlation coefficient of 0.9999. A second FBG is employed to compensate for the influence of temperature, with a sensitivity of 12.14 pm/°C in the range of 20 °C-70 °C and a correlation coefficient of 0.9998. In addition, the sensor maintains stable pressure measurements within the temperature range of 25 °C-55 °C. With its high sensitivity and stability, the sensor is suitable for low-pressure, high-sensitivity detection.
Vol. 25, No. 15, pp. 28314-28322
Citations: 0
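Using the two reported sensitivities, a standard reference-grating compensation scheme subtracts the temperature-induced wavelength shift before converting to pressure. Treating the pressure grating's thermal response as identical to the reference grating's is an illustrative simplification, not a claim about the authors' compensation procedure.

```python
# Reference-FBG temperature compensation sketch using the reported sensitivities:
# pressure 3.382 pm/kPa, temperature 12.14 pm/degC. Assumes both gratings share the
# same thermal response, which is an illustrative simplification.
K_P_PM_PER_KPA = 3.382
K_T_PM_PER_DEGC = 12.14

def pressure_kpa(delta_lambda_sensor_pm, delta_lambda_ref_pm):
    """Subtract the temperature-only shift seen by the reference FBG, then scale to pressure."""
    compensated_pm = delta_lambda_sensor_pm - delta_lambda_ref_pm
    return compensated_pm / K_P_PM_PER_KPA

# A 10 degC drift produces ~121.4 pm on the reference grating that must not be read as pressure:
print(pressure_kpa(delta_lambda_sensor_pm=459.6, delta_lambda_ref_pm=121.4))  # ~100 kPa
```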
Visual-Inertial State Estimation Based on Chebyshev Polynomial Optimization
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-02, DOI: 10.1109/JSEN.2025.3583221
Hongyu Zhang; Maoran Zhu; Qi Cai; Yuanxin Wu
Abstract: Visual-inertial navigation systems (VINS) are essential across various applications. Traditional optimization-based VINS mainly rely on the preintegration method for integrating inertial measurements. While this method avoids recalculating inertial integration during optimization by generating relative pose constraints, it compromises the quasi-Gaussian nature of the original measurements, and the constraints must be updated through linearization whenever the biases change. To address these problems, this article proposes a visual-inertial fusion method based on Chebyshev polynomial optimization. The proposed method directly incorporates the original inertial measurements into the objective function, thereby maintaining the quasi-Gaussian properties of the inertial measurements and the additive nature of the biases. Specifically, it represents the continuous navigation state using Chebyshev polynomials and determines the unknown coefficients by minimizing weighted residuals of the initial conditions, dynamics, and measurements. Simulations and experiments on public datasets demonstrate that the proposed method significantly improves batch optimization accuracy, achieving approximately 40% improvement in velocity and about 50% improvement in position over the preintegration method.
Vol. 25, No. 15, pp. 29618-29629
Citations: 0
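The core idea of representing the continuous state as a Chebyshev series and fitting its coefficients against measurement and dynamics residuals can be sketched with NumPy. The example below fits a 1-D position trajectory jointly to noisy position fixes and accelerometer samples; it is a toy stand-in under stated assumptions, not the paper's full visual-inertial formulation.

```python
# Toy 1-D Chebyshev-coefficient state estimation: p(t) = sum_k c_k T_k(t); coefficients are
# found by weighted least squares over position-measurement and acceleration (dynamics) residuals.
import numpy as np
from numpy.polynomial import chebyshev as C

deg = 8
t_meas = np.linspace(-1, 1, 30)          # position-measurement times (scaled to [-1, 1])
t_imu = np.linspace(-1, 1, 200)          # accelerometer sample times

true_pos = np.sin(2 * np.pi * t_meas)
true_acc = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t_imu)
z_pos = true_pos + 0.05 * np.random.randn(t_meas.size)
z_acc = true_acc + 0.50 * np.random.randn(t_imu.size)

# Design matrices: rows map coefficients to predicted position / acceleration.
A_pos = C.chebvander(t_meas, deg)                      # T_k(t) at measurement times
A_acc = np.column_stack([                              # second derivative of each basis function
    C.chebval(t_imu, C.chebder(np.eye(deg + 1)[k], 2)) for k in range(deg + 1)
])

w_pos, w_acc = 1.0 / 0.05, 1.0 / 0.50                  # weights ~ inverse noise std
A = np.vstack([w_pos * A_pos, w_acc * A_acc])
b = np.concatenate([w_pos * z_pos, w_acc * z_acc])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

est_pos = C.chebval(t_meas, coeffs)
print("position RMSE:", np.sqrt(np.mean((est_pos - true_pos) ** 2)))
```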
DIDPT: Dense Interaction Deep Prompt RGBT Tracking
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-02, DOI: 10.1109/JSEN.2025.3583417
Muyang Li; Xiwen Ren; Guangwen Luo; Haofei Zhang; Ruqian Hao; Juanxiu Liu; Lin Liu; Ping Zhang
Abstract: Existing RGB-infrared object tracking methods struggle to fuse data from both modalities effectively, hindered further by the magnitude disparity between them. While large-parameter models in RGB tracking demonstrate robustness on extensive datasets, their performance remains underutilized when incorporating infrared data. To address these challenges, this article proposes a deep prompt learning method based on dense interaction to enhance RGB-infrared fusion and leverage the strengths of large models in RGBT object tracking. We treat infrared information as a prompt for the tracker and freeze the pretrained parameters of the RGB backbone model. During the initial feature-extraction phase of the backbone, a dense infrared prompt interaction encoder is employed to integrate infrared information. Subsequently, we introduce learnable prompts in the Transformer module while freezing the parameters of the Transformer encoder layers, updating only the learnable prompts and the fully connected layer to enhance the model's capacity to learn from the additional modality. This approach updates only 2.8% of the model's parameters during training, saving computational resources. Extensive experiments on the widely used RGBT234 and LasHeR datasets demonstrate the effectiveness of the proposed method. Overall, our approach better integrates RGB and infrared images and introduces prompt learning to address the magnitude imbalance in the data, providing a promising solution to the challenges in RGBT object tracking.
Vol. 25, No. 15, pp. 29310-29324
Citations: 0
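The parameter-efficient training recipe described above (frozen pretrained backbone, learnable prompt tokens, trainable head) can be sketched as follows; the encoder, dimensions, and head are hypothetical stand-ins rather than the authors' tracker.

```python
# Sketch of deep prompt tuning on a frozen backbone: only the prompt tokens and the head
# are updated, which is how a small trainable fraction of parameters is achieved.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim=768, n_prompts=10, n_classes=2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():      # freeze all pretrained RGB weights
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(1, n_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, tokens):                    # tokens: (B, N, embed_dim)
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        x = torch.cat([prompts, tokens], dim=1)
        x = self.backbone(x)                      # frozen transformer layers
        return self.head(x.mean(dim=1))

def trainable_fraction(model):
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total
```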
Channel Information Exchange-Induced Spatiotemporal Graph Convolutional Network
IF 4.3, CAS Zone 2, Multidisciplinary Journal
IEEE Sensors Journal Pub Date: 2025-07-02, DOI: 10.1109/JSEN.2025.3582952
Yuan Xu; Fan Qin; Yi Luo; Wei Ke; Qun-Xiong Zhu; Yan-Lin He; Yang Zhang; Ming-Qing Zhang
Abstract: Accurately predicting traffic flow is a crucial task for providing functional services to urban road networks. Urban traffic is a complex and constantly evolving system, influenced not only by factors within individual regions but also by interactions among different regions across the entire city network. Most current traffic-flow prediction methods rely on static geographic information, ignoring the cross-regional flow of traffic within cities. To tackle this issue, this article proposes a channel information exchange-induced spatiotemporal graph convolutional network (CIE-STGCN). The network constructs both a static adjacency matrix built from geographic information and a dynamic adjacency matrix based on adaptive parameter learning for the nodes. The static and dynamic STGCNs operate on separate channels to extract features. Additionally, a channel information exchange module based on channel attention and gating mechanisms is designed to achieve global complementarity of the static and dynamic features in the traffic-flow data. Validation on multiple real-world traffic-flow datasets demonstrates the efficacy of the proposed model in reliably predicting traffic flow.
Vol. 25, No. 15, pp. 29262-29270
Citations: 0
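A small sketch of a graph-convolution step that combines a static, geography-derived adjacency with a dynamic adjacency learned from adaptive node embeddings, in the spirit of the static/dynamic channels described above; the embedding size and the way the two branches are summed are assumptions, not the published CIE-STGCN design.

```python
# Graph convolution with a fixed geographic adjacency plus a learned dynamic adjacency
# built from node embeddings (softmax of their inner products).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAdjacencyGCN(nn.Module):
    def __init__(self, n_nodes, in_dim, out_dim, emb_dim=16):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(n_nodes, emb_dim))   # adaptive node parameters
        self.w_static = nn.Linear(in_dim, out_dim)
        self.w_dynamic = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_static):
        # x: (batch, n_nodes, in_dim); a_static: (n_nodes, n_nodes) from geographic distances.
        a_dynamic = F.softmax(torch.relu(self.node_emb @ self.node_emb.t()), dim=1)
        h_static = self.w_static(torch.einsum("ij,bjf->bif", a_static, x))
        h_dynamic = self.w_dynamic(torch.einsum("ij,bjf->bif", a_dynamic, x))
        return torch.relu(h_static + h_dynamic)
```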